On a more serious note, I am really hoping the OL gets the new trains running already. I have been forgiving, based on the reasons they gave. I even see an ironic upside: since the software issue doesn't stop delivery of more train sets, the eventual release will feel more impactful, with the possibility of several trainsets joining the fleet rather than just one.
But patience can only hold out so long. The new trains are just sitting there. We are told no more than that it is software issues, with no additional information that would let us gauge how much programming or QA remains. We don't know what's failing. We've seen mention of a certification issue, but nobody has spelled out exactly what kind of certification is needed (what does "internal certification" actually require - how many tests and documented results - unless it keeps failing critical tests?). When it's getting into the ninth month of testing, we're well past the point of a few failed items. At this point, they need to fix whatever is holding things up or be honest about the issue keeping the trains out of service.
In a few weeks, we'll either get a badly needed sigh of relief and morale boost, or I think we're going to see a level of hostility and anger in our discourse that we haven't seen before. We've been irritated and annoyed by the MBTA's state for a long time, but I do feel the public angst and discourse have really risen since the derailment.
The Age of Software in Everything, Everywhere You Turn, and Its Consequences
But patience can only hold out so long. The new trains are just sitting there. We are told no more than that it is software issues, with no additional information that would let us gauge how much programming or QA remains.
Unfortunately we are on the leading edge of the transition from hardwired and mechanical systems -- which we have learned about over the centuries and which can be Exhaustively Tested -- to systems relying upon extremely complex real-time software, which by dint of their complexity --
Can Not be Exhaustively Tested
Take a look at the Boeing 737 MAX, which has been promised to be really ready for passengers how many times in the past few months.
A slight digression -- a couple of observations about system complexity and reliability:
Class One:
The system is mechanical or electrical in its intrinsic nature [even if there are embedded computational components within individual electro-mechanical modules]. The number of parts that can fail is finite, and the result of a single component failing is in general boundable. Even in pure mechanical or electro-mechanical systems, though, cascading failures can and do occur -- think bridge and building collapses, trains derailing because of bad wheels or rails, or old-style electric power outages caused by falling wires.
Class Two:
The system, while having mechanical and electrical elements [for its external physical interactions], is primarily composed of hundreds of software modules, each of which is in turn composed of other modules and ultimately millions of lines of code. The mechanical response of the system depends on proper measurements being made by a large number of sensors, the resultant real-time digital information being properly processed, and the results of all of this computation being used to control a large number of electro-mechanical actuators [very simple example: think the anti-lock brakes on a passenger car]. The number of individual components, while still finite, is so massive that the number of combinations of interacting pairs of components becomes astronomical. Unfortunately, the possible failure modes of any one of the enormous number of individual components in response to a similarly massive range of sensor inputs [system stimuli] become multiplicatively enormous. And because the interactions occur inside computer memory, the potential for cross-connected failure modes [even in just pairs of components] is nearly uncountable. Testing is only able to explore a very small part of the system space, with failures of minor software "parts" often bringing everything down. This includes the introduction of replacement modules [designed to fix known local problems] -- think of your experience with Microsoft Windows on any device.
"Don't turn off your PC the system is updating X_XX modules"
Today's systems of all types are rapidly becoming
All Class Two, with implications for reliability and testing that we are not yet coming to grips with, either professionally or as a society. Now throw into this mix the exposure of the system to external interconnections and the corresponding external inputs [both accidental and intentional] -- think the
Internet of Everything aka
Hacker Heaven.
The T is replacing all of the old and reliable -- if getting "long in the tooth" -- Class One equipment with
sparkling, jazzy Class Two trains, switches, signaling, etc. -- all of the replacement motivated by the promises of improved efficiency, lower operational costs, easier maintenance, and increased capacity. However, with all that come the issues of testability and unpredictable failures due to interactions of myriad lines of embedded software with the outputs of a multiplicity of sensors and external network stimuli.
The alternative is to go back to the PCC era of the Green Line, when you could "smell the failing parts" as they heated up.