Tesla’s new self-driving software throws out its old code entirely

The update, modestly named “v12.3,” is in reality a totally different approach to teaching cars to drive themselves.

Tesla, the electric vehicle maker known for its innovative and sometimes controversial approach to self-driving technology, has made a significant change to its Full Self-Driving (FSD) software. The company, led by Elon Musk, has decided to completely overhaul its existing FSD code, adopting a new approach in the months leading up to the planned launch of its Robotaxi service.

The move scraps the legacy code and replaces it with a new system that leverages neural networks to learn and adapt to real-world driving scenarios, a notable shift from the previous rule-based algorithms.

Changing Everything

The legacy FSD software relied on a complex set of predefined rules and conditions to navigate vehicles through various driving situations. While this approach achieved a certain level of autonomy, it often struggled to handle the myriad edge cases and unpredictable situations that drivers encounter in the real world.

In contrast, Tesla’s new FSD system employs deep learning techniques, allowing the vehicle to learn how to drive from vast amounts of data collected from real-world driving experiences. By exposing the neural networks to a wide array of scenarios, the system can develop a more nuanced understanding of its environment and adapt its behavior accordingly.
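To make the contrast concrete, here is a deliberately toy sketch — invented for illustration, and nothing like Tesla’s actual stack — of a hand-written braking rule versus a “learned” one whose threshold is fit to examples of human braking behavior:

```python
# Toy illustration (not Tesla's code): rule-based vs. learned control.
# A rule-based controller hard-codes behavior for anticipated cases;
# a learned controller fits its behavior to examples of human driving.

def rule_based_brake(distance_m: float, speed_mps: float) -> bool:
    """Hand-written rule: brake if time-to-collision drops below 2 s."""
    if speed_mps <= 0:
        return False
    return distance_m / speed_mps < 2.0

class LearnedBrake:
    """Trivially 'learned' rule: its threshold is the average
    time-to-collision at which drivers in the data chose to brake."""
    def __init__(self):
        self.threshold = None

    def fit(self, examples):
        # examples: list of (distance_m, speed_mps, driver_braked)
        ttcs = [d / s for d, s, braked in examples if braked and s > 0]
        self.threshold = sum(ttcs) / len(ttcs)

    def predict(self, distance_m: float, speed_mps: float) -> bool:
        return speed_mps > 0 and distance_m / speed_mps < self.threshold

# The learned controller picks up whatever safety margin the data
# exhibits, rather than the margin an engineer wrote down.
model = LearnedBrake()
model.fit([(30, 10, True), (50, 10, False), (18, 10, True), (60, 15, False)])
```

At 23 m and 10 m/s (a 2.3 s time-to-collision), the hand-written rule does not brake, while the fitted one does, because the example data happened to show more cautious drivers. The point of the real system is the same at vastly larger scale: behavior comes from data, not from enumerated rules.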

This shift towards a neural network-based approach offers several potential benefits. Rather than engineers hand-coding updates to driving rules, the system can improve as it is retrained on new data, providing greater flexibility and adaptability. As the software encounters more diverse driving situations, its decision-making can be continually refined.

However, the transition to a neural network-based system also presents new challenges. Ensuring the transparency and interpretability of the system’s decision-making is crucial for building trust and confidence in the technology. Regulators, safety advocates, and consumers alike may want a clear understanding of how the software arrives at its decisions, especially in the event of an accident or malfunction.

The new system is receiving mixed reviews online. Dell Technologies CEO Michael Dell praised it on X, writing, “Super impressive, Tesla FSD v12.3 is. Like a human driver, it is.” Some Reddit users had a different experience, however. As one put it: “I had my first experience with it, and it felt like having a 15-year-old driver. It’s a very cool technical demo but I feel like they have a long way to go to achieve their initial goal… like at least five more years until they can even argue it’s lvl 4 or 5.”

Tesla’s History with Safety and Autonomy

Tesla’s journey towards autonomous driving began in 2014 with the introduction of its Autopilot system, which initially offered features such as adaptive cruise control and lane-keeping assist. Over the years, the company has continuously improved and expanded Autopilot’s capabilities, with the ultimate goal of achieving complete self-driving.

The path to autonomy has not been without its challenges and controversies. Tesla has faced criticism for its aggressive marketing of Autopilot and FSD features, with some arguing that the names themselves can mislead consumers into overestimating the system’s capabilities. There have also been several high-profile accidents involving Tesla vehicles operating on Autopilot, raising concerns about the safety and reliability of the technology.

In response to these incidents, the National Highway Traffic Safety Administration (NHTSA) has launched investigations into Tesla’s Autopilot system. The agency has repeatedly called on the company to address safety concerns and provide more transparent information to consumers about the limitations of its semi-autonomous features. In a statement, the NHTSA emphasized the importance of “ensuring that vehicles with automated driving systems operate safely and as intended, which is essential for maintaining public trust and confidence in these technologies.”

Tesla’s decision to completely overhaul its FSD software has garnered mixed reactions from industry experts and safety advocates. Some have expressed concerns about the potential risks associated with implementing a new technology on such a large scale, particularly given Tesla’s history of unfulfilled promises related to its autonomous driving capabilities.

However, Tesla’s recent decision to rebrand its “Full Self-Driving” suite as “Supervised Full Self-Driving” reflects a more accurate description of the system’s current capabilities, experts say. Pratik Chaudhari, an engineering professor at the University of Pennsylvania with extensive experience in self-driving cars — including being part of the team at nuTonomy (now part of Motional, a Hyundai-Aptiv joint venture) that demonstrated the world’s first autonomous taxi service in 2016 — explained, “Drivers have always been required to provide active supervision. The new name emphasizes this fact.”

Challenges in Achieving Full Autonomy

Chaudhari highlighted the limitations of current self-driving technology, stating, “There are still regular incidents where Teslas, and assistive autonomous cars by other car makers, have behaved in an unsafe fashion. The driver is expected to be alert and intervene in case of such events.” He emphasized the challenges in handling unpredictable human behavior and the difficulty in ensuring a car is “99.99% safe,” due to the vast diversity of situations that can occur on the roads.

One of the primary challenges in achieving full autonomy lies in the ability of self-driving systems to handle edge cases and unexpected situations. While machine learning and computer vision have made significant strides in recent years, there is still a long way to go before these technologies can match the adaptability and decision-making capabilities of human drivers.

Companies in the self-driving space are employing various strategies to ensure the safety and reliability of their systems. Rigorous, extensive real-world testing is crucial to expose the software to a wide array of scenarios. However, as Chaudhari explained, “It is very difficult to say that a car is 99.99% safe because there is a large diversity of situations that can occur — we will never ever be able to see them all. Therefore, we will never be able to test against them.”

Alternative Approaches to Self-Driving Technology

While Tesla’s approach to self-driving relies heavily on visible light cameras and neural network-based software, other companies in the industry are taking different paths. Shawn Taikratoke, CEO of autonomous mobility startup Mozee, seemed generally impressed by Tesla’s decision to revamp its FSD software: “Tesla’s bold decision to totally rebuild their Full Self-Driving suite reflects a culture that prioritizes quick innovation and adaptability — qualities that are critical as they work toward ambitious targets like deploying robotaxis.”

However, Mozee’s approach to self-driving differs from Tesla’s, using a broader array of sensors, including radar and infrared, to improve reliability across diverse environments. Taikratoke explained, “Our diverse sensor approach guarantees that our vehicles can function reliably in a wide range of environments, from the planned pathways of university campuses to the unpredictable streets of metropolitan centers. By maintaining this flexible and thorough approach, we ensure that our technology is adaptable and scalable, reflecting our customers’ different demands and the surroundings in which they operate.”

Mozee believes the success of autonomous technologies will ultimately depend on achieving seamless interaction between vehicle-to-vehicle and vehicle-to-infrastructure communications, resulting in a fully integrated network that enhances safety and efficiency on a large scale. Taikratoke emphasized the importance of collaboration and partnerships in driving this technological revolution forward: “As we grow, this adaptability will be critical to not only meeting but exceeding the expectations of our partners and the communities we serve. We are excited to work alongside industry leaders like Tesla to shape the future of transportation and create a safer, more efficient world for all.”

As the industry continues to evolve, companies like Tesla, Mozee, and Waymo are at the forefront of developing adaptable, scalable solutions that prioritize safety, efficiency, and real-world applicability. While Tesla’s decision to completely overhaul its FSD software has raised concerns, it also reflects the company’s commitment to innovation and its willingness to adapt to the challenges of creating truly autonomous vehicles.

The ultimate realization of fully autonomous vehicles will require not only further technological breakthroughs but also progress on the regulatory front and in terms of public acceptance. Governments and regulatory bodies will need to establish clear guidelines and standards for the development, testing, and deployment of self-driving vehicles. Public trust in the technology will also be crucial, as consumers must feel confident in the safety and reliability of autonomous systems before they can be widely adopted.

In a 2020 interview, Elon Musk expressed his optimism about the future of self-driving technology, stating, “I am extremely confident that we will have the basic functionality for level 5 autonomy complete this year.” However, he also acknowledged the challenges, adding, “There are many small problems that need to be solved, and then there’s the challenge of solving all those small problems and putting the whole system together.”

As the self-driving industry continues to evolve, collaboration between companies, researchers, and regulators will be essential to ensure the safe and responsible development of autonomous technologies. By working together to address the technological, legal, and societal challenges associated with self-driving vehicles, we can pave the way for a future in which autonomous transportation transforms the way we live, work, and travel. 

As Chaudhari noted, “Achieving safe driving is also a slow march of the 9s” — meaning, making the system 99.99% safe, then 99.999%, and so on, continually refining and solving edge cases. “It requires technological progress, as well as progress on policy and infrastructure.”
