Discussion:
slow cifs read on 3.12/3.10 kernel
Rolf Anderegg
2014-03-10 19:36:18 UTC
In the course of evaluating realtime kernels for an Intel Atom-based setup, I
ran across major read-speed problems on CIFS mounts when using a 3.12 kernel
(I also tried 3.10, same issue). This resulted in transfers at <800 KB/s,
compared to >10 MB/s when using the 3.4 kernel's CIFS. At first I thought it
had to do with one of these old buffer-size-related solutions:

https://bugzilla.samba.org/show_bug.cgi?id=7699
https://bugs.launchpad.net/ubuntu/+source/cifs-utils/+bug/810606
http://ubuntuforums.org/showthread.php?t=1578068

However, tweaking rsize/wsize and CIFSMaxBufSize did not change anything.
Increasing the CIFS debug verbosity and dumping TCP traffic showed that in the
3.12 case, smaller packets are negotiated and transmitted, which obviously
results in lower throughput. Here are my test logs:

Kernel 3.12.10-rt15 CIFS test log (slow speed):
http://7f42b4439bec450b.paste.se

Kernel 3.4.82-rt100 CIFS test log (normal speed):
http://863be082c9262448.paste.se
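For reference, the debug verbosity and traffic dump used for the logs above can be reproduced with something like the following; the interface name `eth0` is a placeholder.

```shell
# Raise cifs.ko debug verbosity; messages go to the kernel ring buffer
sudo sh -c 'echo 1 > /proc/fs/cifs/cifsFYI'

# Watch the CIFS messages as they arrive
dmesg | grep -i cifs

# Capture the SMB traffic (TCP port 445) to inspect negotiated read sizes
sudo tcpdump -i eth0 -w cifs.pcap port 445
```

The resulting cifs.pcap can then be opened in Wireshark to compare the negotiated read lengths between kernels.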

I'm all out of knobs to turn, so before I dive further into the kernel's CIFS
code, I thought I'd ask for some expert help to check whether this is a known issue.

Thanks in advance,

Rolf Anderegg
Steve French
2014-03-10 19:52:40 UTC
There are some quick obvious things to check:
1) since server is Samba - check if unix extensions negotiated and
check default rsize
(you can simply do cat /proc/mounts to see what was negotiated)

2) with unix extensions enabled, the maximum read size (and write
size) is much larger (which usually should help) so check if
differences in rsize or wsize can explain performance differences.

3) Similarly, increasing the maximum number of simultaneous requests
that the server can support for each client can have an impact on
performance ("max mux = 50" is the default in the server's smb.conf
but it can be increased if your workload has many requests from one
client at the same time).

4) caching behavior changed: we moved to a much stricter caching
policy ("cache=strict") in later kernels. Mounting with
"cache=loose" allows more efficient client-side write caching,
which is usually sufficient for most workloads, and may also help.
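As a sketch of checks 1) and 4) above, assuming a mount at /mnt/share of //server/share (placeholder names):

```shell
# 1) Print the options cifs.ko actually negotiated (rsize, wsize, cache mode)
awk '$3 == "cifs" {print $4}' /proc/mounts | tr ',' '\n' | grep -E '^(cache|rsize|wsize)='

# 4) Remount with the looser client-side caching policy
sudo umount /mnt/share
sudo mount -t cifs //server/share /mnt/share -o cache=loose,username=user
```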
--
Thanks,

Steve
Rolf Anderegg
2014-03-11 17:42:28 UTC
Post by Steve French
1) since server is Samba - check if unix extensions negotiated and
check default rsize
(you can simply do cat /proc/mounts to see what was negotiated)
2) with unix extensions enabled, the maximum read size (and write
size) is much larger (which usually should help) so check if
differences in rsize or wsize can explain performance differences.
3) Similarly, increasing the maximum number of simultaneous requests
that the server can support for each client can have an impact on
performance ("max mux = 50" is the default in the server's smb.conf
but it can be increased if your workload has many requests from one
client at the same time).
Thanks Steve for your helpful pointers.
The first three points had no effect in my particular case.
Post by Steve French
4) caching behavior changed: we moved to a much stricter caching
policy ("cache=strict") in later kernels. Mounting with
"cache=loose" allows more efficient client-side write caching,
which is usually sufficient for most workloads, and may also help.
That hit the nail on the head. The cache mode had a dramatic impact
on the CIFS mount read speed:

~29 MB/s with "cache=loose"
<800 kB/s with "cache=strict"

Having read the discussion that led to this change in 3.7
(https://lists.samba.org/archive/samba-technical/2012-April/083228.html), I now
understand the reason for the strict default cache coherency; although "a
little slower" is quite an understatement in my case.
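The comparison above can be reproduced with a simple sequential read; the file path below is a placeholder, and the numbers are only meaningful if the client page cache is dropped first.

```shell
# Drop the client-side page cache so the read really goes over the wire
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'

# Time a sequential read from the CIFS mount; dd reports throughput on stderr
dd if=/mnt/share/testfile of=/dev/null bs=1M count=100
```

Running this once per cache mode (remounting in between) gives directly comparable MB/s figures.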

Thanks,

Rolf
Jeff Layton
2014-03-11 20:28:58 UTC
Post by Rolf Anderegg
That hit the nail on the head. The cache mode had a dramatic impact
on the CIFS mount read speed:
~29 MB/s with "cache=loose"
<800 kB/s with "cache=strict"
Having read the discussion that led to this change in 3.7
(https://lists.samba.org/archive/samba-technical/2012-April/083228.html), I now
understand the reason for the strict default cache coherency; although "a
little slower" is quite an understatement in my case.
If cache=loose helps, then that suggests that you aren't getting
oplocks when you open files. That may or may not be expected depending
on the usage pattern, but that's probably where you should focus your
efforts. Do you have multiple machines or processes opening these files
at the same time?
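A quick way to check this from the client, assuming a Samba server you can also log into, might be:

```shell
# Client: per-tree-connect SMB statistics; watch the "Oplock Breaks" counter
cat /proc/fs/cifs/Stats

# Client: verbose cifs.ko logging shows whether each open was granted an oplock
sudo sh -c 'echo 1 > /proc/fs/cifs/cifsFYI'
dmesg | grep -i oplock

# Server (Samba): list locked files and the oplock each open handle holds
smbstatus -L
```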
--
Jeff Layton <jlayton-H+wXaHxf7aLQT0dZR+***@public.gmane.org>