Colnago C96 Rabobank Team paint scheme

Starting restoration of this Colnago C96.

Just arrived from the Netherlands. Rabobank team paint scheme.

Seat tube (A) = 55 cm (C-C), 57 cm (C-T)
Top tube (B) = 55.5 cm (C-C)
Head tube (D) = 14.1 cm
Rear spacing = 130 mm
Columbus Thron Super tubes

Linux Kernel Tuning: page allocation failure

If you start seeing errors like the following, it means your server or instance is running out of kernel memory.

[10223.291166] java: page allocation failure: order:0, mode:0x1080020(GFP_ATOMIC), nodemask=(null)
[10223.301794] java cpuset=/ mems_allowed=0-1
[10223.307211] CPU: 29 PID: 19395 Comm: java Not tainted 4.14.154-99.181.amzn1.x86_64 #1
[10223.315658] Hardware name: Xen HVM domU, BIOS 08/24/2006
[10223.322004] Call Trace:
[10223.325230]  <IRQ>
[10223.328193]  dump_stack+0x66/0x82
[10223.332213]  warn_alloc+0xe0/0x180

In particular, these order 0 (zero) errors mean there isn’t even a single 4K page available to allocate.
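For context, an order-n request asks for 2^n contiguous pages, so order 0 is the smallest possible allocation. A quick sanity check of the sizes (assuming the standard 4 KiB page size):

```shell
# Allocation size for a given order: 2^order contiguous pages of 4 KiB each
page_kib=4
for order in 0 1 2 3; do
  echo "order $order -> $(( (1 << order) * page_kib )) KiB"
done
# order 0 is a single 4 KiB page; order 3 is 32 KiB, and so on
```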

This might sound strange on a system with plenty of RAM available, but it is actually a common situation on systems where the kernel is not tuned properly.

In particular, we need to look at the vm.min_free_kbytes kernel parameter. From the kernel documentation:


This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages based
proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.

On systems with a very large amount of RAM, this parameter is usually set too low. Change the default value (see the previous paragraphs to avoid values that are too low or too high) and reload it with sysctl. 1GB is the value I use on most large-memory servers (64GB+).
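For reference, the 1048576 figure used below is simply 1 GiB expressed in kilobytes, which is the unit vm.min_free_kbytes expects:

```shell
# vm.min_free_kbytes is expressed in KiB, so a 1 GiB reserve is:
reserve_gib=1
echo $(( reserve_gib * 1024 * 1024 ))   # 1048576
```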

echo "vm.min_free_kbytes = 1048576" | sudo tee -a /etc/sysctl.conf

echo "reloading the settings:"
sudo /sbin/sysctl -p

sysctl vm.min_free_kbytes

Eddy Merckx Corsa-01

Starting the restoration process on this Eddy Merckx Corsa-01.

Size 51. White pearl with red and blue lines.

Zerouno Acciaio 18 MCDV6 tubing

Campagnolo Athena components.
Braze-ons for STI or Ergopower cable guides on the side of the head tube.

Decoded serial number:
HC0 G766
H – EMC employee
C – Corsa
0 – frame size (50cm)
G766 – serial number, frame built in early 1996
The downtube and seat tube are ovalized at the bottom bracket for extra stiffness.

Decoding Eddy Merckx frames

Eddy Merckx Bottom Bracket. Corsa (C) size 50 (0) early 1996 (G). EMC employee “H”.

The symbols to the left of the BB cover (“technical”) are divided into 3 categories:

  • A letter denoting the EMC employee responsible for the final “smoothing” of the frame before painting (A,B,F,G,M,P,T,D,Y,L,N,J,S,H,K, unusual 0, ^)
  • A number indicating the length of the seat tube in cm measured c-c (for example 2 means 52 or 62, 8 means 48 or 58)
  • Letters indicating tube type, geometry or model:
    R = Reynolds 531
    C = Corsa
    X = SLX/SPX
    CX = Criterium
    TT = TSX
    M = Strada (Matrix/Cromor)
    TTB = Century TSX
    XB = Corsa Extra SLX century geo
    WW = Strada since 1992

The symbols to the right of the BB cover (“statistical”) form a serial number consisting of a letter and a set of digits. The letter denotes the production series, and the number identifies the individual frame within that series (001-9999). The exception is production from before 1981: those frames have no letter (prototypes lack even the digits), and there are just over 1000 of them.
E – 1981-1984
Z – 1984-1986
A – 1986-1988
B – 1988-1990
C – 1990-1991
D – 1992-1993
F – 1994-1995
G – 1996-1998
H – 1998-2000
J – 2001-2002
K – 2002-2004
L – 2004-2006
P – 2006-2008

In addition to such markings, there are unusual ones:
CS – Capri Sonne
ED – Europ Decor
W – Winning ?
KE – Kelme
HL186P – Hans Lubberding 1986 Pista (Panasonic) and similar – his teammates

Model Weight Chart


Amazon Project Kuiper – job openings in my team

Are you looking for new challenges?
At Project Kuiper we are working to provide broadband internet service to tens of millions of people around the world who are currently underserved. Come join us!

Do you want to know more about this project? Have a look at this video:

Here are some of our current openings. Feel free to reach out directly to me if you want to know more about these or other positions.

Software Engineer

Senior Systems Development Engineer

Senior Ecad Tools Application Engineer

Systems Dev Engineer Enterprise Engineering

Atlassian Support Engineer

Systems Engineer

EBS Storage Performance Notes – Instance throughput vs Volume throughput

I just wanted to write a few lines of guidance on this topic, as it is a recurring question when configuring storage, not only in the cloud but also on bare-metal servers.

What is throughput on a volume?

Throughput is the measure of the amount of data transferred from/to a storage device per time unit (typically seconds).

The throughput consumed on a volume is calculated using this formula:

IOPS (I/O operations per second) x BS (block size) = Throughput

As an example, if we are writing at 1200 ops/sec and the average write size is around 125 KiB, we will have a total throughput of about 150 MB/sec.
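A quick sanity check of that arithmetic, using the numbers from the example:

```shell
# Throughput = IOPS x block size
iops=1200
block_kib=125
kib_s=$(( iops * block_kib ))                      # 150000 KiB/s
echo "$kib_s KiB/s = $(( kib_s / 1024 )) MiB/s"    # ~146 MiB/s, i.e. roughly 150 MB/s
```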

Why is this important?

This is important because we have to be aware of the Maximum Total Throughput Capacity for a specific volume vs the Maximum Total Instance Throughput.

If your instance type (or server) can deliver a throughput of 1250 MiB/s (e.g. m4.16xl) while your EBS volume's maximum throughput is 500 MiB/s (e.g. st1), not only will you hit a bottleneck writing to that volume, but throttling may also occur (e.g. with EBS and other cloud storage services).

How do I find what is the Maximum throughput for EC2 instances and EBS volumes?

Here is documentation about Maximum Instance Throughput for every instance type on EC2:

And here about the EBS Maximum Volume throughput:

How do I solve the problem ?

If your instance/server has more throughput capability than a single volume, add volumes or split the storage capacity across more volumes, so the load/throughput is distributed among them.
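As a rough sketch of the sizing math (using the m4.10xl and gp2 figures from the example in this post; treat the exact numbers as illustrative):

```shell
# How many ~160 MB/s gp2 volumes are needed to match ~500 MB/s of
# instance throughput? Ceiling division:
instance_mbs=500
volume_mbs=160
volumes=$(( (instance_mbs + volume_mbs - 1) / volume_mbs ))
echo "$volumes volumes"   # 4 to fully cover 500 MB/s; 3 volumes reach 480 MB/s
```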

As an example, here are some metrics with different volume configurations:

1 x 3000GB – 9000IOPS volume:


3 x 1000GB – 3000IOPS volume:


Look at some of the metrics: both runs use the same instance type (m4.10xl – 500 MB/s throughput), the same volume type (gp2 – 160 MB/s throughput), and the same job:

  • Using 1 volume, Write/Read Latency is around 20-25 ms/op. This value is high compared to 3x1000GB volumes.
  • Using 1 volume, Avg Queue Length is 25. The queue depth is the number of pending I/O requests from your application to your volume. For maximum consistency, a Provisioned IOPS volume must maintain an average queue depth (rounded to the nearest whole number) of one for every 500 provisioned IOPS in a minute. In this scenario, 9000/500 = 18, so a queue length of 18 or higher is needed to reach 9000 IOPS.
  • Burst Balance is 100%, which is Ok, but if this balance drops to zero (it will happen if volume capacity keeps being exceeded), all the requests will be throttled and you’ll start seeing IO errors.
  • On both scenarios, Avg Write Size is pretty large (around 125KiB/op) which will typically cause the volume to hit the throughput limit before hitting the IOPS limit.
  • Using 1 volume, the write rate is around 1200 ops/sec. With a write size of around 125 KiB, this consumes about 150 MB/s (IOPS x BS = Throughput).
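The queue-depth rule of thumb from the bullets above, as a one-liner:

```shell
# ~1 of average queue depth per 500 provisioned IOPS
provisioned_iops=9000
echo "target queue depth: $(( provisioned_iops / 500 ))"   # 18
```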

1968 Chevrolet Camaro SS Coupe – 4-Speed Manual True SS car

Some time ago I was given the opportunity to be the curator/keeper/custodian of this historic car.

I like to think that, even if you own a car like this, your goal is to take care of it, and pass it along to the next generations when the time comes.

235,147 Chevrolet Camaros were produced in 1968. This Original SS is one of only 12,496 L48 SS Camaros produced that year, which represents only 5% of total 1968 Camaro production.

She was born on Thursday, Jun 13th, 1968 at GM’s Norwood plant near Cincinnati, Ohio.

According to the original warranty booklet with protect-o-plate, she was delivered to Joe O’Brien Chevrolet in Cleveland, OH, and sold to her first owner on Saturday, Jun 29th, 1968, also in Ohio.

She is a True SS, All matching numbers, equipped with the following factory options:

  • L48 350 small block V8/295HP.
  • Muncie M22 close-ratio 4-speed manual
  • Console
  • Decor group
  • Power steering (N40)
  • Power front disc brakes (J52)
  • SoftRay tinted windshield
  • Lemans Blue Paint (UU Code)
  • White front band, white fender stripes (D90 Code)
  • Posi traction 12 Bolt (G80 Code)

Original (matching numbers) L48 350ci V8/295HP motor currently on a stand. Have a look here to see it running.
Transmission casting 3925660 – M22 Muncie close ratio
The currently installed motor is a 2017 GM ZZ4 350ci crate engine: 4-bolt main, forged crank, rods & pistons, high-performance aluminum heads, fuel injected (FiTech). 450HP.

Original interior


Walk through and start up video (start up at 2:49):

All Camaro Show 2021 Class Winner
24th Annual Cruise the Narrows Car Show Sponsored Car

log4j vulnerability – quick notes

This issue may lead to remote code execution (RCE) via use of JNDI.

  • Vendor: Apache Software Foundation
    • Product: Apache Log4j
      • <=2.14.1: affects 2.14.1 and prior versions

Fix: log4j 2.15.0

To list all JAR files on your system containing the vulnerable class file (including fat JARs), you can use:

find . -name '*.jar' -print0 2>/dev/null | while IFS= read -r -d '' f; do
  echo "Checking $f…"
  unzip -l "$f" | grep -F org/apache/logging/log4j/core/lookup/JndiLookup.class
done

Additional details here: