On 03/25/11 12:21 PM, Les Mikesell wrote:
> So no one develops new applications there?

This is a large-scale manufacturing execution system. You don't just go off and design an all-new system based on the buzzwords du jour when your factories are dependent on it.

Picture large factory floors with dozens of assembly lines, each with hundreds of pieces of computer-controlled industrial equipment, all developed by different vendors, many 5-10 years old because THEY STILL WORK, talking proprietary protocols to middleware layers of data concentrators, which in turn talk to a cluster of core databases that track everything going on, and then a maze of back-end reporting systems, shipping systems, data warehousing extractors, realtime production analysis (ok, that's part of reporting), statistical error analysis and trend prediction (feeds back into the reporting databases), etc., etc. There are also subsystems that monitor the overall process flow and manipulate production workloads and product mix, and so on. ALL of this stuff would need replacing to work with a radically different core architecture.

The last major upgrade of the core database architecture took 5 years to deploy in parallel with the previous system, after 5 years of development (and it maintained backwards compatibility with the floor/middleware side of things). All of our ongoing development work is evolutionary rather than revolutionary: new pieces have to be compatible with old pieces. We thought we could kill off the legacy support for some really old factory-floor MSDOS-based systems that used some truly ancient protocol APIs we'd developed over 15 years ago, and then we discovered there are still a few hundred of those burn-in ovens running in some of the more remote factories, so we still need to handle the oddball data format they generate (yes, there are middleware layers that translate the really ancient into the merely antique).

The physical factories are in a perpetual balance on the edge of chaos: if there's a problem on a line, work in progress gets manually moved off to other lines. Events can arrive at the core database out of sequence due to network buffering delays, yet we need to process them in order and still produce accurate responses within 1 second of realtime of the preceding event. Every phase of the data flow has resilience designed in.
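To make that ordering problem concrete, here's a rough sketch (not our actual code, names and numbers made up, Python just for illustration) of the kind of reordering buffer the core side has to implement: hold out-of-sequence events briefly, release them in sequence order, and stop waiting for a missing event once the response window is about to blow.

import heapq
import time

class ReorderBuffer:
    """Hold out-of-sequence events briefly and release them in order."""

    def __init__(self, grace_seconds=1.0, first_seq=1):
        self.grace = grace_seconds   # how long we'll wait for a missing event
        self.next_seq = first_seq    # next sequence number we expect
        self.pending = []            # min-heap of (seq, arrival_time, event)

    def add(self, seq, event):
        # Events arrive whenever the network delivers them, possibly out of order.
        heapq.heappush(self.pending, (seq, time.monotonic(), event))

    def release(self):
        # Yield events that are either next in sequence, or whose lowest-numbered
        # buffered predecessor has waited past the grace window (at which point
        # we stop waiting for the gap and move on).
        now = time.monotonic()
        while self.pending:
            seq, arrived, event = self.pending[0]
            if seq != self.next_seq and (now - arrived) < self.grace:
                break                # still hoping the missing event shows up
            heapq.heappop(self.pending)
            self.next_seq = max(self.next_seq, seq + 1)
            yield seq, event

buf = ReorderBuffer(grace_seconds=1.0)
buf.add(2, "oven 7 load complete")   # arrives early
buf.add(1, "oven 7 load start")      # the event that sat in a network buffer
for seq, ev in buf.release():
    print(seq, ev)                   # prints 1 then 2, back in order

The real thing obviously has to deal with per-line sequence spaces, persistence, and what to do with a late arrival after you've already given up on it, but the tradeoff is the same: a short grace window buys you ordering at the cost of latency.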