I'm wondering what git is doing when it pushes up changes, and why it seems to occasionally push way more data than the changes I've made. I made some changes to two files. When I went to push that data up to origin, git turned that into over 47 MB of data.
Looks like your repository contains a lot of binary data.
> **git-push** - Update remote refs along with associated objects
Associated objects? As you work, git stores your data as objects, and it packs those objects (for example during `git gc`, or when it sends them on a push) into pairs of files named `XX.pack` and `XX.idx`.
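For example (a sketch; the hash in the file names will differ per repository), you can trigger packing yourself and inspect the result:

```sh
# Pack the loose objects, then look at the pack directory
git gc
ls .git/objects/pack/
# Expect a pair of files like (hash is illustrative):
#   pack-1a2b3c4d....idx
#   pack-1a2b3c4d....pack
```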
A good read about packing is here, in *The packed archive format*:
> The packed archive format (`.pack`) is designed to be self-contained so that it can be unpacked without any further information. Therefore, each object that a delta depends upon must be present within the pack.
>
> A pack index file (`.idx`) is generated for fast, random access to the objects in the pack. Placing both the index file (`.idx`) and the packed archive (`.pack`) in the `pack` subdirectory of `$GIT_OBJECT_DIRECTORY` (or any of the directories on `$GIT_ALTERNATE_OBJECT_DIRECTORIES`) enables Git to read from the pack archive.
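To see what a pack actually contains, `git verify-pack` lists every object in it (a sketch; the pack file name will differ in your repository):

```sh
# For each object: SHA-1, type, size, size in the pack, offset, and,
# for deltified objects, the delta depth and the base object's SHA-1
git verify-pack -v .git/objects/pack/pack-*.idx
```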
When git packs your files, it does so in a smart way, so that extracting the data later is very fast.
To achieve this, git uses pack heuristics, which essentially look for similar chunks of content among the objects in your pack and store each repeated chunk only once. If the same header (a license agreement, for example) appears in many files, git will "find" it and store it once. Every file that includes that license then contains a pointer (a delta) back to the stored copy, so git doesn't have to store the same content over and over, and the pack size stays minimal.
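You can watch this happen with a small experiment (a sketch; the file names are made up, and it assumes a sizable LICENSE file exists in the repository): commit two files that share most of their content, repack, and check whether one blob was stored as a delta against the other:

```sh
# Two files sharing a large common chunk (e.g. the same license text)
cp LICENSE file-a.txt
cp LICENSE file-b.txt
echo "unique to a" >> file-a.txt
echo "unique to b" >> file-b.txt
git add file-a.txt file-b.txt
git commit -m "two nearly identical files"

# Force a repack, then look at the blobs in the pack
git gc
git verify-pack -v .git/objects/pack/pack-*.idx | grep blob
# One of the two blobs should show a tiny size-in-packfile plus a delta
# depth and the SHA-1 of the blob it was deltified against
```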
This is one of the reasons it's not a good idea, and not recommended, to store binary files in git: the chance of finding similarity between binary blobs is very low, so the pack size will not be optimal. Git also stores your data in a zlib-compressed (zipped) format to save space, and here again binaries do poorly, since most binary formats are already dense and barely shrink when compressed.
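You can get a feel for the size difference with plain gzip, which uses the same family of compression (a rough illustration outside git; exact byte counts will vary):

```sh
# A million highly repetitive bytes shrink to almost nothing...
head -c 1000000 /dev/zero | gzip -c | wc -c      # ~1 KB
# ...while a million random, binary-like bytes do not shrink at all
head -c 1000000 /dev/urandom | gzip -c | wc -c   # ~1 MB (slightly larger)
```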
Here is a sample of reading a git blob and undoing that zip compression:
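A minimal sketch (it assumes `python3` is available to inflate the zlib stream; the object created here is just an example):

```sh
# Write a small blob into the object database and capture its SHA-1
sha=$(echo 'test content' | git hash-object -w --stdin)

# Loose objects live at .git/objects/<first 2 hex chars>/<remaining 38>
path=".git/objects/${sha:0:2}/${sha:2}"

# Inflate the zlib stream: the result is a header ("blob <size>\0")
# followed by the original content
python3 -c "import sys, zlib; sys.stdout.buffer.write(zlib.decompress(open(sys.argv[1], 'rb').read()))" "$path"
# Output: blob 13\0test content
```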