Are we the last generation of Network Engineers?

Automation is a word that can be both scary and exciting at the same time.

The scary part is often related to the question: will robots steal my job?

Federico Pistono says that's ok.

We read articles about how many jobs will no longer exist in the near future, and sometimes about how new jobs will be created to build and support automation and robots. The most advanced analysts try to predict how a society can survive with massive unemployment; Universal Basic Income (UBI) is often discussed as a possible solution.

The exciting part for me, as a Network Engineer or IT professional in general, is that I can do my job faster and automate the boring parts while keeping my focus on the architecture and on how I can contribute to the business. Are these the first steps of a career shift? Maybe.
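As a concrete, hedged example of the "boring parts" I have in mind, here is a minimal Python sketch (the interface names, port range, and VLAN ID are made up) that renders the same access-port configuration for dozens of interfaces from a template instead of typing it by hand:

```python
# A minimal sketch: render repetitive access-port configuration from a
# template instead of typing it by hand. Port range and VLAN are made up.
from jinja2 import Template

TEMPLATE = Template(
    "interface GigabitEthernet1/0/{{ port }}\n"
    " description access port\n"
    " switchport mode access\n"
    " switchport access vlan {{ vlan }}\n"
)

def render_access_ports(vlan: int, ports: range) -> str:
    """Generate the same stanza for every port, with no copy/paste typos."""
    return "\n".join(TEMPLATE.render(port=p, vlan=vlan) for p in ports)

if __name__ == "__main__":
    print(render_access_ports(vlan=100, ports=range(1, 49)))
```

The engineer's attention moves from typing commands to deciding what the template should contain.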

I recently finished The Glass Cage by Nicholas Carr and I found some very interesting points I'll discuss here.


Learning

If automation is used to bypass learning, it won't help us gain knowledge and skills.

At school we learn basic math by doing operations manually, even though calculators have been available for decades. Learning requires inefficiency.

Most Network Engineers today still build networks by hand, like a craftsman creating a unique product.

This method is slow, error-prone, and very subjective, but it allows juniors to reach expert level through practice and repetition (and errors).

If you think about your career in IT, what are the moments when you learned the most? If the answer is "while troubleshooting", we agree on a concept here:

"Problems produce friction in our lives, but friction can act as a catalyst, pushing us to a fuller awareness and deeper understanding of situations" (The Class Cage, Nicholas Carr

What if the next junior arrives at his new job and finds only automated tools to work with? A couple of experts build the automation tools and everyone else is just an operator of those tools, abstracted from the operations happening under the hood.

How can a novice advance to the expert level on the Dreyfus model if he can't practice?

"The more automated the machine, the less operators has to do" (Automation and management, James Bright, 1958)

The worker becomes a machinist and then a watchman, without opportunities to learn and develop advanced skills.

"What if the cost of machines that think is people who don't?"

George Dyson asked this in 2008. The question is still unanswered.

We can go back to 1966 with "The Social Construction of Reality", and maybe further with Alfred Schütz and his Phenomenology of the Social World, to understand how knowledge is created and transmitted, but the topic is too far from the scope of this post. Follow the links if you're interested.

Automation disaster

Air France learned the hard way how the misuse of automation can lead to a disaster, and since then pilots have been required to manually fly planes to keep their skills sharp. (I strongly advise reading the whole article.)

Can this be a solution for network engineers too? Maybe providing a lab environment or a [sandbox](https://en.wikipedia.org/wiki/Sandbox_(software_development)) where juniors can play and manually do the tasks that are automated in production, so they can learn the actual protocols and challenges involved?
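To make the contrast concrete, here is a hedged sketch (using the Netmiko library; the device addresses, credentials, and VLAN are purely illustrative) of a task that automation hides from the operator, but that a junior could still type line by line at the CLI in such a lab:

```python
# A sketch of a task that automation hides from the operator: pushing the
# same VLAN to a list of switches. In a lab, a junior would type these
# commands at the CLI; in production, a tool runs them unattended.
# Device addresses and credentials below are purely illustrative.
from netmiko import ConnectHandler

LAB_SWITCHES = ["10.0.0.11", "10.0.0.12"]  # hypothetical lab addresses

VLAN_CONFIG = [
    "vlan 200",
    "name guest-wifi",
]

for host in LAB_SWITCHES:
    conn = ConnectHandler(
        device_type="cisco_ios",
        host=host,
        username="lab",
        password="lab",
    )
    print(conn.send_config_set(VLAN_CONFIG))
    conn.disconnect()
```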

This sounds like a viable option, but I don't think it will work long term. It is quite likely that future network devices will not have a CLI anymore, at least not for actual configuration, so this learning path will be precluded.

What will happen to experts then?

"a formerly experienced operator who has been monitoring an automated process may now be an inexperienced one." (Ironies of Automation, Lisanne Bainbridge, Department of Psychology, University College London, 1983)

Even if operators are experts, automated systems erode their skills by removing the need for practice. Automation surprises will still happen, and experts will no longer be experts. No good.

What is the cure for imperfect automation?

Reading about self-driving cars, it seems the biggest challenge is coexistence with human-driven cars: humans cause most self-driving car accidents.

The proposed solution for this problem is total automation.

Will it work the same way for networks? Should we keep humans completely out of the path and become just operators of tools like Apstra that manage the devices?

Automation #FAIL

Can we assume automation is flawless? Like any product created by humans, we already know it will fail, and when automation fails, it fails fast and at scale.

But the reaction to a failure differs depending on whether it is caused by humans or by software.

It is somehow easier to accept an incident caused by human error than the same incident caused by automation.

If/when the first self-driving car kills somebody, it is easy to expect that Luddites will ask to remove all of them from the streets. Such a request would be totally unjustified: self-driving cars will more likely cause a drop in road deaths, because safety features like automatic braking will make them safer.

Note: many safety features are not strictly related to self-driving cars; they're becoming common on average models too.

I see two possible scenarios here, and the difference is based on the technology used to automate.

1. Tesla automation strategy

After the first Tesla crashes, Elon Musk said:

the updated Autopilot would use the radar data to better assess the situation, so that the car would respond more accurately (source)

This is the key point. After a human-caused accident, it is impossible to learn from it and improve the driving skills of all current and future drivers. Tesla can do that: fix and improve the software, then update all the cars.

We should ask our network automation vendors to do the same: no black boxes, and full visibility of checks and improvements. And on top of that, no MVPs, but thoroughly tested products with safeguards to prevent incidents.
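As an illustration of what such safeguards could look like, here is a minimal, hypothetical sketch of a pre-change/post-change check wrapped around an automated change; the check and push functions are placeholders, not any vendor's API:

```python
# A minimal, hypothetical sketch of the kind of safeguard I would like to
# see around any automated change: verify the current state before pushing,
# verify again after, and stop if anything looks off.
# The functions below are placeholders, not a vendor API.

def bgp_sessions_established(device: str) -> bool:
    """Placeholder: return True if all BGP sessions on `device` are up."""
    raise NotImplementedError("query the device or your monitoring system here")

def apply_change(device: str, config: str) -> None:
    """Placeholder: push `config` to `device` via your automation tool."""
    raise NotImplementedError

def safe_change(device: str, config: str) -> None:
    if not bgp_sessions_established(device):
        raise RuntimeError(f"{device}: pre-check failed, refusing to push change")
    apply_change(device, config)
    if not bgp_sessions_established(device):
        raise RuntimeError(f"{device}: post-check failed, manual investigation needed")
```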

2. AI automation

I'm not an AI expert, so I may be wrong here, but as I understand it, AI outcomes may not always be predictable, and it may be impossible to understand what information or algorithm led to a particular decision.

Should we just trust the AI not to repeat the same error in the future? This sounds a bit scary to me. Probably I read too much sci-fi where the AI goes bad.

Are we the last generation of Network Engineers?

We can split the path to automation into three phases.

Phase 1 - past

  • automation reduces experts' workload by automating boring/repetitive tasks
  • improvements in precision, speed, and economics
  • many immediate advantages

Phase 2 - present

  • more automation
  • fewer trained people required to perform the same work
  • reduction of skills
  • reduction of costs
  • experts don't work on CLI anymore

Phase 3 - possible future?

  • experts not needed anymore in the Enterprise
  • ignorant operators
  • strong reliance on automation products

The answer to the question could be yes, we are the last generation of Network Engineers. We can already see what's happening at Facebook, Amazon, Netflix, and Google. The Enterprise market will follow; it will take some time, but:

The future is already here — it's just not very evenly distributed. (William Gibson)

As Network Engineers we should embrace the new automation tools. We'll spend years or decades supporting the Enterprise's transition to automation and then happily retire while robots mow our lawns.

Juniors will mainly operate the automation tools; some of them will work for the companies that create those tools, and that's where all the fun will be.

DISCLAIMER

This post is just an educated guess. Technology and society are changing so fast that futurist is now a full-time job, and most of them fail at it; I do not pretend to know how to do better.