MapReduce: Compression and Input Splits


This is something that always raises doubts:

When considering compressed data that will be processed by MapReduce, it is important to check whether the compression format supports splitting. If it does not, the number of map tasks may not be what you expect.

Let’s suppose we have an uncompressed file stored in HDFS whose size is 1 GB: with an HDFS block size of 64 MB, the file will be stored as 16 blocks, and a MapReduce job using this file as input will create 16 input splits, each processed independently as input to a separate map task.

Now suppose the file is gzip-compressed and its compressed size is 1 GB. As before, HDFS will store the file as 16 blocks. But creating a split for each block will not work, since it is impossible to start reading at an arbitrary point in the gzip stream, and therefore impossible for a map task to read its split independently of the others.

In this case, MapReduce will not try to split the gzipped file, since it knows that the input is gzip-compressed (by looking at the filename extension) and that gzip does not support splitting.

In this scenario, a single map task will process the 16 HDFS blocks, most of which will not be local to that map (so there is an additional data-locality cost).

This job will not parallelize as expected: it will be less granular, and so may take longer to run.

The gzip format uses DEFLATE to store the compressed data, and DEFLATE stores data as a series of compressed blocks. The problem is that the start of each block is not distinguished in any way that would allow a reader positioned at an arbitrary point in the stream to advance to the beginning of the next block, thereby synchronizing itself with the stream. For this reason, gzip does not support splitting.
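As a rough check, this mirrors what TextInputFormat does internally in recent Hadoop versions: look up the codec by filename extension and treat the file as splittable only if the codec implements SplittableCompressionCodec. A minimal sketch (the class name and the example path in the comment are mine):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.SplittableCompressionCodec;

public class CheckSplittable {
    public static void main(String[] args) {
        // Pass the file to inspect as the first argument, e.g. /data/file.gz
        Path file = new Path(args[0]);

        // The factory picks the codec by filename extension, just as MapReduce does
        CompressionCodec codec = new CompressionCodecFactory(new Configuration()).getCodec(file);

        if (codec == null) {
            System.out.println("Uncompressed: each HDFS block can become an input split");
        } else if (codec instanceof SplittableCompressionCodec) {
            System.out.println(codec.getClass().getSimpleName()
                    + " is splittable: splits can start inside the file (e.g. bzip2)");
        } else {
            System.out.println(codec.getClass().getSimpleName()
                    + " is NOT splittable: a single map task will read the whole file");
        }
    }
}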

Here is a summary of compression formats:

[Table: summary of Hadoop compression formats and whether each supports splitting (see Hadoop: The Definitive Guide)]

(a) DEFLATE is a compression algorithm whose standard implementation is zlib. There is no commonly available command-line tool for producing files in DEFLATE format, as gzip is normally used. (Note that the gzip file format is DEFLATE with extra headers and a footer.) The .deflate filename extension is a Hadoop convention.

Source: Hadoop The Definitive Guide.

 

YARN: change configuration and restart node manager on a live cluster


This procedure changes the YARN configuration on a live cluster, propagates the change to all the nodes, and restarts the YARN NodeManager on each of them.

Both commands below list all the nodes in the cluster and then filter the DNS names in order to execute a remote command via SSH. You can customize the sed filter to your own needs; here it matches DNS names in the Elastic MapReduce format (ip-xx-xx-xx-xx.eu-west-1.compute.internal).

1. Upload the private key (.pem) file you use to access the master node to the cluster. Change the private key permissions to 600 (i.e. chmod 600 MyKeyName.pem).

2. Edit ~/conf/yarn-site.xml on the master node and use a command like this to propagate the change across the cluster.

yarn node -list|sed -n "s/^\(ip[^:]*\):.*/\1/p" | xargs -t -I{} -P10 scp -o StrictHostKeyChecking=no -i ~/MyKeyName.pem ~/conf/yarn-site.xml hadoop@{}:/home/hadoop/conf/

3. This command will restart the YARN NodeManager on all the nodes.

 yarn node -list|sed -n "s/^\(ip[^:]*\):.*/\1/p" | xargs -t -I{} -P10 ssh -o StrictHostKeyChecking=no -i ~/MyKeyName.pem hadoop@{} "yarn nodemanager stop"

 

Hadoop 1 vs Hadoop 2 – How many slots do I have per node?


This is a topic that always raises a discussion…

In Hadoop 1, the number of tasks launched per node was specified via the settings mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum.

But these settings are ignored when set on Hadoop 2.

In Hadoop 2 with YARN, we can determine how many concurrent tasks are launched per node by dividing the resources allocated to YARN by the resources requested by each MapReduce task, and taking the minimum of the two ratios (memory and CPU).

This approach is an improvement over that of Hadoop 1, because the administrator no longer has to bundle CPU and memory into a Hadoop-specific concept of a “slot”.

The number of tasks that will be spawned per node:

min(
    yarn.nodemanager.resource.memory-mb / mapreduce.[map|reduce].memory.mb
    ,
    yarn.nodemanager.resource.cpu-vcores / mapreduce.[map|reduce].cpu.vcores
    )

The obtained value can be set in the ‘mapreduce.job.maps‘ property in the ‘mapred-site.xml‘ file.
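For example, with purely illustrative values (not defaults):

yarn.nodemanager.resource.memory-mb  = 8192
mapreduce.map.memory.mb              = 1024
yarn.nodemanager.resource.cpu-vcores = 8
mapreduce.map.cpu.vcores             = 1

min(8192 / 1024, 8 / 1) = min(8, 8) = 8 concurrent map tasks per node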

Of course, YARN is more dynamic than that, and each job can have unique resource requirements — so in a multitenant cluster with different types of jobs running, the calculation isn’t as straightforward.

More information:
http://blog.cloudera.com/blog/2014/04/apache-hadoop-yarn-avoiding-6-time-consuming-gotchas/

Testing whether the Java Cryptography Extension (JCE) is installed


If JCE is already installed, you should see the jar files ‘local_policy.jar’ and ‘US_export_policy.jar’ in $JAVA_HOME/jre/lib/security/.

But we can also test it from code:

import java.security.NoSuchAlgorithmException;
import java.security.NoSuchProviderException;
import javax.crypto.KeyGenerator;

class TestJCE {
    public static void main(String[] args) {
        boolean jceSupported = false;
        try {
            // Ask the SunJCE provider for an AES key generator and request a
            // 256-bit key; this fails if the algorithm or provider is missing.
            KeyGenerator kgen = KeyGenerator.getInstance("AES", "SunJCE");
            kgen.init(256);
            jceSupported = true;
        } catch (NoSuchAlgorithmException e) {
            jceSupported = false;
        } catch (NoSuchProviderException e) {
            jceSupported = false;
        }
        System.out.println("JCE Supported=" + jceSupported);
    }
}

To compile (assuming the file name is TestJCE.java):

$ javac TestJCE.java

The previous command will create the TestJCE.class output file.

To run the program:

$ java TestJCE
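Alternatively (an extra sketch, not part of the original test), Cipher.getMaxAllowedKeyLength reports whether the unlimited-strength policy files are in place: it returns Integer.MAX_VALUE when they are installed, and 128 for AES with the default policy:

import javax.crypto.Cipher;

class TestJCEPolicy {
    public static void main(String[] args) throws Exception {
        // Prints 2147483647 (Integer.MAX_VALUE) with the unlimited-strength
        // policy files installed, or 128 for AES with the default policy.
        System.out.println("Max AES key length = " + Cipher.getMaxAllowedKeyLength("AES"));
    }
}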

 

Update OpenSSL to 1.0.1g


Update OpenSSL to the latest version in three steps:

1) Compile and install the latest openssl version:
$ sudo curl https://www.openssl.org/source/openssl-1.0.1g.tar.gz | tar xz && cd openssl-1.0.1g && sudo ./config && sudo make && sudo make install_sw

2) Replace the old openssl binary with the new one using a symbolic link:
$ sudo ln -sf /usr/local/ssl/bin/openssl `which openssl`

3) Test it:

$ openssl version

It should return:

OpenSSL 1.0.1g

 

Stress Test: Bees With Machine Guns !


A few days ago I tried a very interesting tool: Bees With Machine Guns!!

This is a tool for stress testing the Load Balancer and Auto Scaling services on Amazon AWS.

After setting up our server infrastructure, we can generate a load test with the bees. This way, we will see the Auto Scaling service in action, creating new instances of our server or reducing the number of instances when the load decreases.

Installation:

$ git clone git://github.com/newsapps/beeswithmachineguns.git
$ easy_install beeswithmachineguns

Create the credentials file:

[Credentials]
aws_access_key_id = <your access key>
aws_secret_access_key = <your secret key>

These credentials must be placed in the .boto file in our home directory, containing the key and secret key we use with our Amazon AWS account. The application will use these credentials to create the bees, which are nothing more than EC2 instances.

Usage:

bees up -s 4 -g public-sg -k hvivani-virg-1
bees attack -n 10000 -c 250 -u http://loadbalancer.hvivani.com/
bees down

The first line spins up 4 bees (EC2 instances) using the credentials from the .boto file, along with the permissions set in ‘public-sg’ (a Security Group defined in the region) and the key ‘hvivani-virg-1’ (the private key used to connect to any instance in the region).

The second line sends the 4 bees to attack the site http://loadbalancer.hvivani.com/ with 10000 requests, 250 concurrent at a time.

The last line removes the bees (terminates the attack instances).

Let’s play…

Execute remote commands with sudo


A few days ago I needed to run a couple of commands on a remote server, for which we use a syntax like this:

$ ssh -p66 hvivani@server "cd /home/hvivani/backup/; ls -l"

Note that the commands we want to execute are separated with “;”.

Now, what happens if I need to run something like this?

$ ssh -p66 hvivani@server "cd /etc;sudo vi sudoers"

We will get the following error:

hvivani@server's password: 
sudo: sorry, you must have a tty to run sudo

To execute remote commands with sudo over ssh, we must use the “-t” parameter, which creates a pseudo tty terminal and allows the execution:

$ ssh -t -p66 hvivani@server "cd /etc;sudo vi sudoers"

My history with Redhat/Fedora


Here is an extract from an email detailing my story with Fedora, written for the people of Gulbac:

Fedora is a distro that has people who love it and people who hate it, because back in 2003 Redhat decided to divide the waters between the commercial and the community versions.
Unfortunately, the way they handled it was not the tidiest (at least that is how we saw it from our country, and from my point of view), and many moved to Debian or other distros.
I had been working with Redhat since the 90s; in fact, in 99 I installed a full dual network at the Peralta Ramos College of Mar del Plata, because they were going to teach the kids with Redhat 7.

In 2003, I started developing a point-of-sale system under Linux with fiscal printers for the company where I work now, and suddenly Redhat announced that there would be no Redhat 10 and that an important decision had to be made: pay Redhat for a stable, supported version, or install Fedora, which is supported by the community.
Since I had already started the development, and as always with zero budget, I continued with Fedora. In fact, the first points of sale under Linux that we put on the street ran on Fedora Core 1 and 2.
But at that time the risk was very high. If the project died, I died too… do I explain myself?

Those points of sale were awarded in 2004 for their degree of innovation, being one of the first private developments for fiscal printers under Linux.

The reality today is that the Fedora community develops and Redhat then commercializes the stable versions. This has served to keep the distro alive, since the commercial side is like the horse pulling the carriage. There is a market of companies that consume Redhat, and on the other hand there are those of us who paddle along with no budget (speaking of companies) and use Fedora and/or CentOS.

Today I still work on the same distro (until 2010 I had a server running Fedora Core 2 without problems), mostly because I know it and I know its quirks, and I have had the opportunity to get to know the community, which has given a lot to free software. The truth is that I never stop being amazed at how these gears keep this world turning against all odds.
On the other hand, I keep admiring and respecting the rest of the distros, because I know that in the Linux world things get done with guts. Nothing is done without effort or just with smooth talk.

Cheers!