Hi, I encountered an interesting problem. We have a Java application on a Samba server; the folder is shared to the clients via a Samba share, and so far it works OK. Until now we had Windows clients and everything worked fine. But now we are trying Linux clients, and this is where the fun starts. When a developer copies a new jar to the folder which is shared via Samba, and this copying is done by scp, strange things start happening. After a few clicks the application stops working, returning NoClassDefFoundError, even though the file is there and readable. After that it is not enough to just stop the application; you have to unmount and then mount the share again. I tried turning off oplocks in the Samba configuration, as suggested in the Samba HOWTO, but it doesn't fix the problem. Has anyone seen a similar error?
BR
Hi,
On Fri, Sep 25, 2009 at 08:07, janezkosmr janezkosmr@volja.net wrote:
<snip>
I believe the problem is that you are rewriting the file on the server (with "scp") while it is open on the clients (using NFS? or a mounted CIFS share?). As the clients have the file open they will have parts of it cached; those parts will be updated on the server, but the stale cache will persist on the client, sometimes for quite a long time, or until you unmount the network filesystem, which it seems is what you are currently doing to fix the issue.
I would suggest that you change the procedure used to update the .jar file. Instead of uploading the file with the same name, which overwrites it in place, upload the file under a new temporary name and, when the transfer is finished, rename it to its definitive name (which effectively replaces the old file, which is then deleted). That way the file has the same name but a new i-node (in effect it's a different file). Clients that had the old file open will keep using it until they're done with it. Clients that read the new file at that point will get the new i-node number, and any cache they had for the old file will not be used, as it belongs to a different i-node.
I don't think you can upload files like this with scp, but I'm almost sure you can do it with "sftp", where you have a more complete command language, including a "put" command that allows a different name on the remote side and a "rename" command to change the file name once the upload is finished. You can script the transfer with something like:
sftp myserver.example.com <<!
put $localfilename $remotetmpname
rename $remotetmpname $remotefinalname
!
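For what it's worth, a rough sketch of the same idea using scp to a temporary name plus a rename over ssh should also work; the host, path and file names here are only placeholders:

# upload under a temporary name, then rename it on the server;
# mv within the same directory is atomic, so readers see either the old jar or the new one
scp app.jar user@myserver.example.com:/srv/share/app.jar.tmp
ssh user@myserver.example.com 'mv /srv/share/app.jar.tmp /srv/share/app.jar'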
HTH, Filipe
On Fri, 2009-09-25 at 08:53 -0400, Filipe Brandenburger wrote:
<snip>
------ Or if that does not work, push the files to the server through CIFS if on a local LAN. It could maybe be that the dev's machine still has a lock on the file, just maybe?
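A rough sketch of what I mean, with the share name, mount point and credentials made up, would be to mount the share on the dev's Linux box and copy through it:

# mount the Samba share on the developer's machine (needs CIFS mount support / cifs-utils)
mount -t cifs //sambaserver/appshare /mnt/appshare -o username=devuser
# copying through the share lets smbd see the write instead of it happening behind its back
cp new-app.jar /mnt/appshare/app.jar
umount /mnt/appshare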
John
Maybe not exactly what you are experiencing, but I have experienced many strange CIFS issues from both CentOS and Windows systems communicating via wired ethernet.
I found specifically that there is a known issue between Windows Server 2003 and CentOS/Linux 5.x because of incompatibilities produced by how M$ implements CIFS in the kernel. Microsoft will not address this, according to what I read. I stumbled across this while trying to mount a Windows-managed storage server from a CentOS server running Bacula. I auto-mounted the remote Windows share in fstab with no problem, but when transferring a lot of files, or large files, it crashed the Linux kernel on the CentOS box running the Bacula Director. The fix was to run the remote storage system as CentOS. No more CIFS required. Funny how when Windows was removed from the formula there were no more problems. :-) Not sure this helps your exact sitch.
Larry Kemp
Network Engineer
U.S. Metropolitan Telecom, LLC
Address: 24017 Production Circle, Bonita Springs, FL 34135
Adtran ASP/ATSA Internetworking, ASP/ATSA IP Telephony, ASP/ATSA Wireless
-----Original Message-----
From: JohnS
Sent: Friday, September 25, 2009 10:19 AM
To: CentOS mailing list
Subject: Re: [CentOS] samba file locking
<snip>
Hi,
On Fri, Sep 25, 2009 at 08:07, janezkosmr janezkosmr@volja.net wrote:
I encountered an interesting problem. We have a Java application on a samba server. The folder is then shared to the clients via a samba
<snip>
When a developer copies a new jar to the folder which is shared via Samba, and this copying is done by scp, strange things start happening. After a few clicks the application stops working, returning NoClassDefFoundError, even though the file is there and readable. After that it
<snip>
I believe the problem is that you are rewriting the file on the server (with "scp") while it is open on the clients (using NFS? or mounted CIFS?). As the clients have the file open they will have parts of it cached, and those parts will be updated on the server but the cache will persist on the client for sometimes quite a long time, or until you unmount the network filesystem, which it seems is what you are currently doing to fix the issue.
I would suggest that you change the procedure to update the .jar file.
<snip> You're on Linux. I'd agree with changing the procedure, but to:
scp <filename.datetimestamp>
ln -s <filename.datetimestamp> <filename>
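A minimal sketch of that, with made-up file names and paths (and ln -sfn so the link can be replaced on later deploys):

# upload the jar under a date-stamped name, then point the well-known name at it on the server
scp app-20090925.jar devuser@sambaserver:/srv/share/
ssh devuser@sambaserver 'ln -sfn app-20090925.jar /srv/share/app.jar'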
mark
Filipe Brandenburger wrote:
<snip>
I don't think you can upload files like this with scp, but I'm almost sure you can do it with "sftp" where you have a more complete command language including a "put" command that allows a different name at the remote side and a "rename" command to change the file name once the upload is finished. You can script the transfer with something like:
sftp myserver.example.com <<!
put $localfilename $remotetmpname
rename $remotetmpname $remotefinalname
!
Rsync normally creates the updated file under a tmp name and renames it only when the transfer is complete. It has the advantage that if you repeat the transfer with unchanged files it doesn't actually copy them again, so you don't have to track which files in a directory need to be updated - and it should work over ssh anywhere you could use scp.
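A one-line version of that, with made-up names, would be something like:

# rsync over ssh: the file is built under a temporary name on the server and
# renamed into place only when the transfer is complete
rsync -av app.jar devuser@sambaserver:/srv/share/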
Les Mikesell wrote:
Rsync normally creates the updated file under a tmp name and renames it only when the transfer is complete. It has the advantage that if you repeat the transfer with unchanged files it doesn't actually copy them again, so you don't have to track which files in a directory need to be updated - and it should work over ssh anywhere you could use scp.
I thought the default mode for rsync involved block checksumming and only sending blocks that changed, which get written in place. There are options which modify this behavior.
John R Pierce wrote:
I thought the default mode for rsync involved block checksumming and only sending blocks that changed, which get written in place. There are options which modify this behavior.
No, the default creates a new file that is renamed only when complete. This is much healthier for files that may be in use - or for transfers that fail. Only the differing blocks are transferred, but a new complete copy is reassembled from the old file plus the changes.
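For reference, the option that switches to the in-place behavior described above is --inplace (names again are placeholders):

# with --inplace the changed blocks are written directly into the existing destination
# file instead of into a renamed temporary copy, which is exactly what you don't want
# for a jar that may be open on the clients
rsync -av --inplace app.jar devuser@sambaserver:/srv/share/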