On 08/30/2015 06:39 PM, Jason Warr wrote:
> On 8/30/2015 4:59 AM, Adrian Sevcenco wrote:
>> On 08/30/2015 12:02 PM, Mike Mohr wrote:
>>> such hardware, but expect the throughput to fall through the
>>> floor if you use such hardware.
>> why? what is the difference between the silicon from a HBA card and
>> the same silicon on motherboard?
> I'm sure he's referring to what is essentially lane sharing. A SAS
> expander is in many ways like an Ethernet switch. You have 8 lanes
> coming off your SAS3008, 4 per SFF-8087 connector or 8 individual
> SATA-like sockets on the motherboard. You can plug any number of
> these into a host port on a SAS expander, and you then have
> n*6 Gbit of bandwidth from the host to the expander. Then you plug
> targets and/or additional expanders into the downstream ports.
> Everything on the downstream ports has to share that bandwidth, so
> you can run into a wall if you try to push too much bandwidth to too
> many devices at once. In practice, though, it is usually not a
> problem at a 2-3x oversubscription of lanes with HDDs. You will see
> it, though, if you are really pushing a lot of SSDs.

yeah, but I am not (6 Gbit/s for 24 HDDs is OK for me) ... so I would like
to get back to my original problem/question: why would the
motherboard-integrated LSI SAS3008 not support more than 8/16 devices
cascaded over SAS (through SAS backplanes), as the OEM technical support
claims, when the SAS3008 specification says that it can?

Has anyone tried cascaded SAS storage with a SAS chip integrated on the
motherboard?

Well, in the end I will take a chance and buy 2 such servers, and it will
end in one of 2 possible ways: either bad-mouthing the OEM for their
implementation or for their technical support :)

Thank you,
Adrian
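[Editor's note: the lane-sharing argument above reduces to simple arithmetic: aggregate device demand versus host uplink bandwidth. A minimal sketch, using hypothetical per-device rates (not measurements) and the 8x6 Gbit figures quoted in the thread:]

```python
# Back-of-the-envelope SAS expander oversubscription check.
# Per-device rates below are illustrative assumptions, not measurements.

def oversubscription(host_lanes, lane_gbps, n_devices, device_gbps):
    """Ratio of worst-case downstream demand to upstream host bandwidth."""
    host_bw = host_lanes * lane_gbps      # bandwidth from HBA to expander
    device_bw = n_devices * device_gbps   # all devices streaming at once
    return device_bw / host_bw

# 8 lanes at 6 Gbit/s from the SAS3008; 24 HDDs at ~1.6 Gbit/s (~200 MB/s) each:
hdd_ratio = oversubscription(host_lanes=8, lane_gbps=6.0,
                             n_devices=24, device_gbps=1.6)
print(f"HDDs: {hdd_ratio:.2f}x")  # -> 0.80x, the uplink is not the bottleneck

# Same expander with 24 SSDs at ~4.8 Gbit/s (~600 MB/s) each:
ssd_ratio = oversubscription(host_lanes=8, lane_gbps=6.0,
                             n_devices=24, device_gbps=4.8)
print(f"SSDs: {ssd_ratio:.2f}x")  # -> 2.40x, the uplink becomes the wall
```

This matches the thread's point: spinning disks sit comfortably under the uplink even at 2-3x lane oversubscription, while a shelf of busy SSDs blows well past it.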