On Thu, Mar 13, 2008 at 3:03 PM, Sean Carolan <scarolan@gmail.com> wrote:
Checking binary files into CVS, or any revision control system, is usually a broken thing. You want to either check in the contents of the tarball separately (if they are going to change), or just have CVS copy it into the archive as-is by updating CVSROOT/cvswrappers.
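For reference, an entry along these lines in CVSROOT/cvswrappers tells CVS to treat matching files as binary; the -k 'b' option disables keyword expansion and line-ending conversion, so CVS stores and checks the file out verbatim instead of trying to diff it:

    # treat gzipped tarballs as opaque binaries
    *.tar.gz -k 'b'
    *.tgz -k 'b'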
This comes back to the point of my first post: I'm looking for an *easy*-to-manage system to keep track of one directory of files that are updated once in a while. We're not working on a huge code base with multiple branches, etc. I suppose we could check in the files inside the .tar.gz separately, but I was hoping to avoid that since the contents of this archive are maintained by a different department. I'd really rather keep it intact as it is.
Will SVN be better equipped to cope with large binaries? I don't understand why CVS chokes on a 1GB file when all it has to do is move it from one directory to another. I even gave this machine 3GB of swap so it had 5GB of total memory space available, but it still dies when doing a cvs checkout.
Because these tools are meant to deal with source code files and the diffs between revisions of them. You are cramming a gigabyte of compressed bits at it, and it's trying to make sure it can give you a diff of it later on. I don't have any idea why you would want to store it in a CVS-type tool if it's not going to change that often.. a simple MD5 sum of the tarball, checked against a stored value, would probably do.
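A minimal sketch of that checksum approach, using a hypothetical tarball name:

    # record the checksum once, when the tarball is delivered
    md5sum bigapp.tar.gz > bigapp.tar.gz.md5
    # later, verify that the copy on disk hasn't changed
    md5sum -c bigapp.tar.gz.md5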
Remember, we have NO clue what you are wanting to do with this or why.. a couple of sentences does not give us enough background on the problem you are trying to solve. You never mentioned 1GB tarballs or anything like that.. you mentioned config and script files that update quite frequently, that's all.