Sunday, July 22, 2012

Core dump file not found

Has it ever happened to you that you ran a program, it aborted saying 'core dumped', but then when you went looking for the core dump file, you couldn't find it anywhere? 0-o

Well, it happened to me today. I tried finding the 'core' file in each of the likely directories: the pwd, the executable's directory, and all the directories in the $PATH variable.

To my dismay, none of them had the core file in them.

After some Googling, I found that there are cases where the core file is not created, in order to save space (core files are usually large, around 1 GB).

I also found out that on my system (Ubuntu 12.04 LTS), the default size limit for core files is 0.

This can be found using the command

ulimit -a


If the output shows core file size (blocks, -c) 0, that means 0 blocks have been allocated to the core dump file, and hence no core dump file is created.
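To see just the relevant line instead of the full report (assuming a bash-style ulimit -a output), you can filter for it:

```shell
# Print only the core file size line from the full limits report
ulimit -a | grep -i core
```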

To enable the core dump, we can do the following:


ulimit -c unlimited

This raises the core dump size limit to unlimited. Now, try executing the code that caused the Aborted (core dumped) message, and check the pwd (present working directory) for the 'core' file.

You will notice that the core file is very large. Hence I suspect that in Ubuntu, as a protective measure, the core dump file has been allocated 0 blocks by default.
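As a quick sanity check, the whole sequence can be run in one shell session. A minimal sketch (note the limit change only applies to the current shell, and whether a plain 'core' file appears also depends on /proc/sys/kernel/core_pattern, which Ubuntu may point at a crash handler instead of a file):

```shell
ulimit -c            # prints 0 when core dumps are disabled
ulimit -c unlimited  # lift the limit for this shell session only
ulimit -c            # now prints "unlimited"
# Crash a throwaway subshell; with a plain core_pattern,
# a 'core' file should appear in the present working directory
sh -c 'kill -SEGV $$'
ls -l core* 2>/dev/null || true
```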

Next up: tips on how to use objdump for debugging through a core file. :)

Wednesday, July 18, 2012

binary files diff and patch

I wanted to send a large file from a server to a client multiple times.
Though the change in the data between iterations was negligible, it was essential that the copies on the client and the server be exactly identical.

The time taken to transfer this data was huge, and we had to decrease it. Hence it was required that only the changes made to the file be diff'ed out, and this (much smaller) patch file be scp'ed over to the client.

Also, at the client end, the file had to be patch'ed, so that the client had the exact same copy as the server.

Here's how we go about this:

1> Diff the file

diff old_file new_file > patch_file   

// old_file was created in previous iteration and new_file created in current iteration.

2> Copy the patch file to the client

scp patch_file client@client_IP:patch_file



3> AT CLIENT END :: patch the file in the client

patch -o new_file old_file patch_file

// -o writes the patched result to new_file instead of modifying old_file in place.

For more information, please see man patch and man diff.
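The three steps above can be rehearsed locally with two throwaway files (old_file, new_file and patch_file are the same hypothetical names used above; the scp hop is skipped):

```shell
printf 'line one\nline two\n' > old_file
printf 'line one\nline 2\n'   > new_file
diff old_file new_file > patch_file || true   # diff exits 1 when the files differ
patch -o patched_file old_file patch_file     # -o writes the result to a new file
cksum new_file patched_file                   # both lines should show the same checksum and size
```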

In order to do this for binary files, you need to install bsdiff.

for Ubuntu use

sudo apt-get install bsdiff


then do ::

bsdiff old_file new_file patch_file
scp patch_file client@clientip:
AT CLIENT END:
bspatch old_file new_file patch_file

// bspatch takes the output file name as its second argument; no redirection is needed.

NOTE :: Check whether the checksum of the newly created file is the same as that of the new file on the server.
This can be done using the cksum command.

If they are not the same, you can use jdiff (source code available here :: http://jojodiff.sourceforge.net/)

Also, note that jdiff and jpatch, the two binaries built from that source code, are compiled for 32-bit machines (use the file command to check the binary type).

For kernel version 3 and above, this worked perfectly fine. However, if you plan to use jdiff and jpatch on machines with kernel 2.6 or so, you will need to recompile them for a 64-bit machine.

Recompiling regenerates jdiff and jpatch after one or two trivial error resolutions.

bsdiff OR jdiff, which one is better?

bsdiff uses an internal compression algorithm to create a very small patch; it goes without saying that the time taken to generate the patch is correspondingly greater.

jdiff, on the other hand, takes less time and generates a larger patch, but gives you the guarantee that the final file after patching is an exact replica of the original binary file (verified using cksum). Also, it is meant for 32-bit OSes.