As Jake and Sechs have alluded to, if people provide "fail-safes" to override the captain or pilot, all that's going to happen is that TV shows like Black Box or Air Crash Investigation are going to show episodes where the conclusion is software error rather than pilot error.
I write software for a living, lately Java. The evangelists in the org I currently work for are big on JUnit testing: you write test cases for each class and method you write. The trouble I see with this is that I can write simple test cases for each piece of code, but it isn't the simple (blindingly obvious) stuff that bugs my code. It's the weird corner cases, where module A talks to module B via module C and interacts with another application (Z) entirely, which validates when, where, and what data I can get from application Y, which allows my program to talk to module D, except when the planets are in alignment and somebody forgot to fill out their birth date correctly with four-digit years in application X. Then the whole lot crashes. It's when the unexpected happens. A good pilot may recover. An average pilot probably won't. Software can't think, so it will be no better than an average pilot.
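A toy sketch of what I mean (the class and method names are made up for illustration): the per-method test on age() is exactly the kind of blindingly obvious case a JUnit suite covers, and it passes. Nothing in a per-method test, though, exercises what happens when application X hands you a two-digit year.

```java
public class BirthDateCheck {
    /** Age in years. Implicitly assumes a four-digit birth year --
        an assumption no unit test here ever states or checks. */
    static int age(int birthYear, int currentYear) {
        return currentYear - birthYear;
    }

    public static void main(String[] args) {
        // The simple, blindingly obvious case: the unit test passes.
        System.out.println(age(1980, 2005));  // 25, as expected

        // The corner case: some other application sent a two-digit year.
        // Every method involved passes its own tests; the combination
        // still produces nonsense.
        System.out.println(age(80, 2005));    // 1925 -- clearly wrong
    }
}
```

Each method is individually "correct" and individually tested; it's the unstated assumption at the module boundary that blows up, and that's precisely what per-class unit tests don't see.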
There was a particularly useful air-crash show I saw a few months back about a Peruvian aircraft that crashed. 'Twas a foggy night, and the aircraft had just taken off. The pilot started to receive an overspeed warning, so he slowed the plane (and started to descend). Then he received a stall warning! He tried to get confirmation from ATC. On top of that, he then got a ground-proximity warning! So he was flying too fast, too slow, and too close to the ground. Crash, bang, 70 dead. The cause? Some ground crew had taped over the static ports (part of the pitot-static system) when washing the plane and forgotten to take the tape off. Three people (including the captain) missed it. How would software react in such conflicting circumstances?