zram is a driver in the Linux kernel. It compresses content in memory to reduce the number of pages used by applications.
modprobe zram num_devices=1
# Now you have a /dev/zram0
echo 8G > /sys/block/zram0/disksize
# The disksize must be set before the device can be used
mkfs.ext4 -I 128 -m 0 /dev/zram0
# /dev/zram0 is only a block device, so we need to create a simple filesystem on it
mount /dev/zram0 /mnt/
# Now you can put some files into /mnt/ and you will find that they occupy less space
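Once files are in /mnt/, you can check how well they actually compress: zram exposes statistics in /sys/block/zram0/mm_stat, whose first two fields are orig_data_size and compr_data_size, both in bytes. A minimal sketch of computing the ratio — run here on a made-up sample line, since the real file only exists after the device is set up:

```shell
# On a live system: sample=$(cat /sys/block/zram0/mm_stat)
# Fields: orig_data_size compr_data_size mem_used_total ... (sample values below)
sample="104857600 26214400 28311552 0 28311552 0 0"
orig=$(echo "$sample" | awk '{print $1}')
compr=$(echo "$sample" | awk '{print $2}')
ratio=$(awk -v o="$orig" -v c="$compr" 'BEGIN {printf "%.1f", o / c}')
echo "compression ratio: ${ratio}x"
```

On a real machine, replace the sample line with a read of /sys/block/zram0/mm_stat; the ratio you see depends entirely on how compressible your data is.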
But that’s not the only way to use zram. We can also combine zram with tcmalloc to reduce the memory cost of user applications.
LD_PRELOAD="/usr/lib/libtcmalloc.so" TCMALLOC_MEMFS_MALLOC_PATH=/mnt/ redis-server
Now redis-server uses memory backed by zram. If we use “lsof” to check the redis-server, it shows:
redis-ser 16648 XXX DEL REG 253,0 11 /mnt/.DlM24E
redis-ser 16648 XXX 3u REG 253,0 1052672 11 /mnt/.DlM24E (deleted)
That’s the file created by the tcmalloc library, and its contents are already compressed by zram.
I have worked at Alibaba Group for more than 9 years. Recently I have been working at Alimama, a subsidiary of Alibaba Group and the biggest advertisement publishing company in China. At present, we need C++/Java developers to build new back-end basic services for our new business.
Role: C++/Java Developer for storage systems or high-performance computing
1. Building and optimizing a distributed key-value storage system
2. Building and optimizing a distributed computing engine for the Linear Regression algorithm
3. Building and maintaining the back-end service for the Advertisement Publishing System
Skills & experience required:
1. Familiar with storage systems or high-performance computing systems
2. Strong background in Redis/Rocksdb/Hadoop/Glusterfs
3. Very familiar with at least one of C/C++/Java/Scala
4. More than 3 years of experience with storage systems or HPC as a developer
5. Passionate about new technologies and eager to continuously push the boundaries
Anyone who is interested in the position above can send an email to: firstname.lastname@example.org
1. If we see this error report:
Container XXX is running beyond virtual memory limits
The solution is here: the Java heap size should not be bigger than the map/reduce task memory. Cloudera recommends setting the heap size to about 0.8 of the map/reduce memory.
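For example, a sketch for mapred-site.xml — the memory values below are illustrative assumptions, not settings from my cluster; the -Xmx values are roughly 0.8 of each task’s memory:

```xml
<!-- Sketch only: example values, adjust to your cluster -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3276m</value> <!-- ~0.8 of 4096 MB -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx6553m</value> <!-- ~0.8 of 8192 MB -->
</property>
```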
2. The “/tmp/” directory becomes full.
This is usually caused by spilled data from the map output. This article gives a good overview of the whole Map/Reduce algorithm in Hadoop, with a detailed and clear picture.
As a result, my solution is to add a configuration to core-site.xml so that the inevitable spill data is written to different disks for load balancing.
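The snippet itself is not shown above, so here is a guess at what such a configuration could look like (an assumption on my part: the intermediate/spill directories are usually set through mapreduce.cluster.local.dir, which technically lives in mapred-site.xml rather than core-site.xml, and the disk paths below are made up):

```xml
<!-- Assumption: a comma-separated list spreads intermediate/spill data
     across several disks; the paths are examples only -->
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>/disk1/mapred/local,/disk2/mapred/local,/disk3/mapred/local</value>
</property>
```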
3. Don’t set “yarn.nodemanager.resource.memory-mb” to more than 0.8 of the physical memory, or it will cause unexpected failures for jobs.
4. If we launch more map or reduce tasks than the physical cores of the servers, it may lead to tremendous timeouts for these tasks. Therefore, adjust “mapreduce.map.memory.mb” and “mapreduce.reduce.memory.mb” carefully to limit the number of concurrent map/reduce tasks.
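The arithmetic behind this tip: YARN fits roughly (node memory / per-task memory) containers on each node, so raising the per-task memory caps the concurrency. A quick sketch with made-up example values:

```shell
# Example values, not taken from any real cluster:
node_mem_mb=49152   # yarn.nodemanager.resource.memory-mb (48 GB)
map_task_mb=2048    # mapreduce.map.memory.mb
# YARN can fit at most this many map containers on one node:
echo "concurrent map tasks per node: $((node_mem_mb / map_task_mb))"
```

Doubling map_task_mb would halve the number of containers per node, which is exactly the lever this tip relies on.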
5. Even if all the CPU cores in the Hadoop cluster are busy, that does not mean we can’t do any more optimization. By using perf, I found out that the system wastes too much time on launching and stopping Java tasks (or containers):
So I changed the value of “mapreduce.input.fileinputformat.split.minsize” to 8GB to reduce the number of mappers. After decreasing the number of mappers from thousands to hundreds, the running time of the Terasort program dropped by more than 50% (and the context switches of the system fell from tens of thousands per second to thousands). Therefore, keeping the number of Java tasks close to the number of physical CPU cores is a better solution.
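To see why the split size matters so much, here is a back-of-the-envelope sketch (the 1 TB input size is a made-up example, not a figure from my cluster): the number of mappers is roughly the input size divided by the split size.

```shell
input_bytes=$((1024 * 1024 * 1024 * 1024))   # 1 TB of input (example value)
small_split=$((256 * 1024 * 1024))           # 256 MB splits
big_split=$((8 * 1024 * 1024 * 1024))        # split.minsize raised to 8 GB
echo "mappers with 256 MB splits: $((input_bytes / small_split))"
echo "mappers with 8 GB splits:   $((input_bytes / big_split))"
```

With far fewer, longer-lived mappers, much less time goes into starting and tearing down JVM containers.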