
The Future of Spaghetti Code

Submitted on August 18, 2018 – 4:20 pm

In his new book “The Future of Work: Robots, AI, and Automation”1, Darrell West of the Brookings Institution makes some very extravagant predictions. Here’s a short but entirely sufficient summary from Brennan Hoban’s presentation of the book:

AI is expected to be better equipped than humans to write a high school essay by 2026, drive a truck by 2027, work in retail by 2031, write a best-selling book by 2049, and perform surgery by 2053. There is a 50 percent chance AI will outperform all human tasks in 45 years and automate all human jobs in 120 years.2

I don’t know what high school West attended, but driving a truck or working in retail are exactly the jobs waiting for you if you suck at writing high school essays. Even today we have robotic vehicles capable of navigating city streets. They can even disregard bicyclists, just like we do. Forget about a bestseller or even a high school essay and show me any computer-generated text – even a couple of paragraphs would do – that is not complete gibberish.

AI technology today is based on old and well-known statistical analysis methods, dumbed down and repackaged into convenient packages for Python, Prolog, LISP, and the like: machine learning toolkits like scikit-learn, deep learning libraries like Keras, and suites of libraries for symbolic and statistical natural language processing like NLTK.

I am not saying these are very easy things to learn. What I am saying, however, is that you don’t need good knowledge of statistical analysis to make use of these tools. And that’s exactly their point: you put some data into one end and a prediction comes out of the other. For most programmers working on AI projects, what happens inside these libraries is a complete fucking mystery. They don’t understand how their software works. Not really.
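The “data in one end, prediction out the other” workflow is easy to show concretely. A minimal sketch using scikit-learn, with a synthetic dataset invented for illustration; everything statistically interesting happens inside `fit()`, out of sight:

```python
# Minimal sketch of the "data in, prediction out" pattern with scikit-learn.
# The dataset is synthetic; in practice most users never look inside fit().
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)             # all the statistics happen in here
accuracy = model.score(X_test, y_test)  # a number comes out the other end
print(accuracy)
```

Three lines of actual modeling code, and no statistical knowledge required to run them. That is both the appeal and the problem.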

Having said that, these machine learning libraries are not black box technology: everything’s open source. You can go and dig in their guts, and you may even understand something. If you look at Keras, for example, the total number of contributors – four hundred-some – seems impressive. But most of those are token contributors, and their understanding of the code requires some deep learning of its own. The number of people who actually understand how any of this works is tiny.

And then we need to think about how all these libraries interact with each other. There is no master plan. There is nobody sitting there testing every aspect of these interactions after every commit. Completely random shit can happen at any moment. And it does happen frequently.

For most programmers, however, it’s just about learning how to use these tools and not about trying to comprehend why they work the way they do. So, while this tech is not a black box, from a practical standpoint it totally is. The amount of useless, idiotic code being written at this very moment under the guise of AI is just mind-blowing. Ever seen a burst sewer pipe? Something like that.

Going back to Mr. West’s prediction of computer-generated bestsellers – there is absolutely no factual basis for his optimism. The tech for this does not exist. Not even as a concept. Thinking machines are as distant a nightmare as they were in Wright’s Automata trilogy a century ago. Today’s AI – even the most expertly written stuff out there – does just one thing: predictive analytics.

Machine learning is inferior to statistical analysis in that it judges models strictly on their performance, ignoring things like model assumptions and diagnostics. Machine learning also oversimplifies the real world by assuming the samples used to build the model are independent, identically distributed, drawn from a static data set, and yet somehow also representative of that data set.
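The “static data set” assumption is easy to break. A toy illustration, with the data and the quadratic relationship invented for the example: a line fit by ordinary least squares looks great on the training range, then falls apart the moment the data drifts somewhere the model never saw.

```python
# Toy illustration of the i.i.d. / static-data assumption: a model fit on
# one distribution degrades badly when the data drifts. Pure stdlib; the
# "model" is ordinary least squares fit by hand to quadratic data.
import random

random.seed(0)

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mse(a, b, xs, ys):
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# True relationship is quadratic; near x in [0, 1] a line fits just fine.
train_x = [random.uniform(0, 1) for _ in range(200)]
train_y = [x * x for x in train_x]
a, b = fit_line(train_x, train_y)

in_dist = mse(a, b, train_x, train_y)                    # tiny error
drift_x = [random.uniform(4, 5) for _ in range(200)]     # distribution shifted
out_dist = mse(a, b, drift_x, [x * x for x in drift_x])  # error explodes
print(in_dist, out_dist)
```

Nothing in the fitted model warns you that the world has moved; the error just quietly becomes garbage.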

Today’s AI is a primitive tool that will achieve none of the amazing developments foretold by Mr. West of the Brookings Institution. Except, maybe, for a self-driving truck. Until it t-bones its competition at some tricky intersection, mistaking it for a plastic grocery bag caught in the wind. This constant building and rebuilding of arbitrary models until something seems to work for a while is a dead-end approach to AI that will never produce anything creative.

The bottom line of yet another late-night rant (and I must be getting old: I’ve been complaining a lot as of late) is this: a machine learning model can tell you when the sun will rise and when the rooster will crow. But it doesn’t understand which event is the cause and which is the consequence. It can’t understand. Still, it will be near damn perfect. But when one day the rooster ends up in the soup, the entire model will be worthless and the sun will not rise.
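The rooster point fits in a dozen lines of code. A sketch with entirely invented data: a “model” that has learned a perfect correlation between crowing and sunrise, and with no notion of causation happily concludes that a quiet morning means no sun.

```python
# The rooster/sunrise problem as code: a "model" that has learned a perfect
# correlation (rooster crows -> sun rises) with no notion of causation.
# All data here is invented for illustration.

history = [{"rooster_crowed": True, "sun_rose": True} for _ in range(1000)]

# "Training": estimate how often the sun rose given that the rooster crowed.
crowed = [day for day in history if day["rooster_crowed"]]
p_sun_given_crow = sum(day["sun_rose"] for day in crowed) / len(crowed)

def predict_sunrise(rooster_crowed: bool) -> bool:
    # The model knows only the correlation, not the direction of causation.
    return rooster_crowed and p_sun_given_crow > 0.5

print(predict_sunrise(True))   # True: near damn perfect, as long as
print(predict_sunrise(False))  # False: the rooster stays out of the soup
```

One missing rooster and the model predicts eternal night. No amount of retraining on the old data fixes a model that never knew what caused what.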

There can be no bestseller or even a high school essay without comprehension. Unfortunately, there’s no Python library for that yet. Those slackers.

1 West, D. M. (2018). The future of work: Robots, AI, and automation. Washington, D.C.: Brookings Institution Press.
2 Hoban, B. (2018, May 23). Artificial intelligence will disrupt the future of work. Are we ready? Retrieved from
