Federal laws and regulations are needed to prevent cyber attacks. - Pixabay

Editor's Note: In the second installment of our AI article series, we explore the potential for cybercrimes against car rental companies that use the technology. Learn more about the top 8 cybercrimes car rental companies should protect themselves against, the state of federal AI regulation, and next steps.

As artificial intelligence (AI) advances and becomes more prevalent among rental car companies, it is likely the industry will see more cases involving its abuse or misuse.

There have already been some notable cases where AI was involved in computer crimes.

  • In 2019, the US Department of Justice charged a former engineer at Google with theft of trade secrets related to the company’s self-driving car technology. United States v. Levandowski, 3:19-cr-00477-WHA (N.D. Cal. Aug. 27, 2019). The engineer was accused of downloading thousands of confidential files related to the technology before leaving Google to start his own self-driving truck company. While not strictly an AI abuse case, this incident highlights the potential for insider threats and intellectual property theft in AI-related fields.
  • A group of hackers used AI to impersonate a CEO’s voice and request a fraudulent money transfer from a UK-based energy firm. While the hackers were ultimately caught and convicted under traditional hacking laws, the case underscores the growing sophistication of cybercriminals and the need for companies to take additional steps to protect themselves against AI-assisted attacks.

Although the above cases did not involve car rental companies, similar situations will eventually reach the industry. As AI use becomes more prevalent, lobbying and awareness efforts to help lawmakers, law enforcement, and the legal system keep pace with these developments, whether by adapting existing laws or creating new ones to address AI-related crimes, will become a relevant topic for the American Car Rental Association (ACRA) and its members.

8 Potential AI Attacks in the Rental Car Industry

As the rental car industry embraces AI, the potential for hackers to use the technology in malicious ways will grow, as will the need for increased cybersecurity measures and adequate regulation to prevent such attacks.

Some cybersecurity attacks the rental car industry could face include:

  • Credential Stuffing Attacks: One common type of attack against loyalty programs is credential stuffing, where attackers use automated tools to try different combinations of usernames and passwords until they find a match. With the help of AI, these attacks can be more sophisticated and targeted, using data from previous breaches to generate more accurate guesses for usernames and passwords (a minimal detection sketch follows this list).
  • Social Engineering Attacks: Another common tactic is social engineering, where attackers use deception to trick users into revealing their login credentials. With the help of AI, these attacks can be more convincing and personalized, using data from social media profiles or other online sources to craft more convincing phishing emails or fake login pages.
  • Loyalty Program Fraud: Finally, attackers may use AI to perpetrate loyalty program fraud, such as by generating fake accounts or using stolen reward points to make fraudulent purchases. With the help of machine learning algorithms, these attacks can be more difficult to detect and prevent, as they may appear to be legitimate transactions at first glance.
  • Exploitation of Rate Codes: Attackers could use AI to generate fake rate codes or to identify valid rate codes that offer discounts or other benefits, which they could then use to make fraudulent bookings or reservations.
  • Fraudulent Vehicle Bookings: Attackers could use AI to generate fake reservations for rental vehicles, either to use the vehicles or to sell them to others. This could involve creating fake driver’s licenses or other identification documents, as well as using stolen credit card information to pay for the rentals.
  • Corporate Discount Fraud: If a car rental company offers discounts to corporate customers, attackers could use AI to generate fake company names or employee IDs to fraudulently claim those discounts.
  • Fleet Rental Fraud: AI could be used to create fake rental companies that claim to have a large fleet of vehicles available for rent when in reality they do not. These fake companies could then make fraudulent bookings or collect deposits and rental fees from unsuspecting customers.
  • Peer-to-Peer Car Rental Fraud: With the growing popularity of peer-to-peer car rental platforms like Turo and Getaround, attackers could use AI to create fake profiles or listings to trick users into renting non-existent vehicles or to steal personal information or payment details.
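To make the credential-stuffing scenario above concrete, here is a minimal detection sketch in Python. It flags a source IP that fails logins against many distinct loyalty-account usernames within a short window. The FailedLogin record, the 60-second window, and the five-username threshold are illustrative assumptions, not recommended production settings.

```python
# A minimal sketch of credential-stuffing detection, assuming the login
# service can report failed attempts. The FailedLogin fields, the 60-second
# window, and the five-username threshold are illustrative, not production values.
import time
from collections import defaultdict, deque
from dataclasses import dataclass


@dataclass
class FailedLogin:
    source_ip: str
    username: str
    timestamp: float


class StuffingDetector:
    """Flags IPs that fail logins against many distinct usernames in a short
    window, a pattern typical of automated credential stuffing rather than a
    single member mistyping a password."""

    def __init__(self, window_seconds=60.0, max_usernames=5):
        self.window = window_seconds
        self.max_usernames = max_usernames
        self.failures = defaultdict(deque)  # source_ip -> recent FailedLogins

    def record_failure(self, event):
        q = self.failures[event.source_ip]
        q.append(event)
        # Discard events that have fallen out of the sliding window.
        while q and event.timestamp - q[0].timestamp > self.window:
            q.popleft()
        # Many *distinct* usernames from one IP is the stuffing signature.
        return len({e.username for e in q}) > self.max_usernames


detector = StuffingDetector()
start = time.time()
suspicious = False
for i in range(8):  # one IP probing eight different loyalty accounts
    event = FailedLogin("203.0.113.7", f"member{i}", start + i)
    suspicious = suspicious or detector.record_failure(event)
print("block IP:", suspicious)  # block IP: True
```

The design choice worth noting is counting distinct usernames per IP rather than raw failures: a legitimate member retrying one password trips a raw failure count, while a bot cycling through leaked credential pairs trips the distinct-username count.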

Federal laws and regulations addressing AI and cybersecurity are needed to prevent these types of attacks.

The State of AI Laws and Regulation

An overarching federal law addressing the intersection of AI and cybersecurity has not yet been put into place. However, several regulatory measures are being implemented at the federal and state levels to handle these matters.

At the federal level, several organizations are working on the issue.

At the state level, several states have passed laws related to data privacy and cybersecurity. For example, the California Consumer Privacy Act (CCPA) gives consumers the right to know what personal information businesses are collecting about them and to request that their data be deleted. Other states, such as New York and Massachusetts, have also passed laws requiring companies to implement certain cybersecurity measures and report data breaches in a timely manner.

Besides these efforts, there have been several bills introduced in Congress that would regulate AI and cybersecurity. For example, the Cybersecurity Disclosure Act of 2019 would require public companies to disclose information about their cybersecurity practices, while the Algorithmic Accountability Act of 2019 would require companies to assess and mitigate the risks of biased or discriminatory algorithms.

Public concern has moved some organizations and advocacy groups to call for the development of standards and best practices related to user-accessible AI. For example, the Partnership on AI, a coalition of companies and organizations focused on developing ethical AI, has issued a set of guidelines for user-centered AI design that emphasize transparency, explainability, and user control.

Future Uses and Protective Steps

AI technologies, namely GPT and related tools built on machine learning, semantic search, and neural networks, are now available to the layman user, and thus to criminals: individuals and organized groups, both local and abroad. This accessibility will lead to increased vulnerabilities within these systems.

Within the confines of the dark web and the deep web, ways to exploit vulnerabilities using these powerful tools are being discussed, prepared, and implemented. The open nature of GPT in its early days, plus access to connectors (API access), enabled the creation of specialized tools, including malware applications that could easily target companies and whole industries.

Because of these concerns, rental car companies are implementing identity verification and fraud detection tools, monitoring user behavior for signs of fraud, and providing regular security training for employees and customers. Some players work with cybersecurity experts and use AI-powered fraud detection tools to identify and mitigate potential vulnerabilities in their systems; a simplified example of such behavioral scoring follows.
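As a sketch of what monitoring user behavior for signs of fraud can look like in practice, the rule-based score below combines several of the signals described earlier (new accounts, booking velocity, unverified corporate codes). The Booking fields, the weights, and the 0.7 review threshold are invented for illustration; real systems tune such signals on labeled data or replace the rules with a trained model.

```python
# An illustrative rule-based booking fraud score. Fields, signal weights, and
# the 0.7 review threshold are invented for demonstration purposes only.
from dataclasses import dataclass


@dataclass
class Booking:
    account_age_days: int
    bookings_last_24h: int
    payment_country: str       # country of the payment card
    license_country: str       # country on the driver's license
    used_corporate_code: bool
    corporate_code_verified: bool


def fraud_score(b: Booking) -> float:
    score = 0.0
    if b.account_age_days < 2:
        score += 0.3  # brand-new accounts carry more risk
    if b.bookings_last_24h > 3:
        score += 0.3  # booking velocity suggests automation
    if b.payment_country != b.license_country:
        score += 0.2  # a mismatch is a weak signal on its own
    if b.used_corporate_code and not b.corporate_code_verified:
        score += 0.4  # unverified corporate discount claims
    return min(score, 1.0)


booking = Booking(account_age_days=1, bookings_last_24h=5,
                  payment_country="US", license_country="RO",
                  used_corporate_code=True, corporate_code_verified=False)
if fraud_score(booking) >= 0.7:
    print("hold reservation for manual review")  # this example scores 1.0
```

No single signal here proves fraud; the point is that several weak signals stacked together can justify holding a reservation for human review rather than rejecting it outright.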

Still, technological efforts and existing legislation are proving insufficient to protect against ongoing and increasingly sophisticated attacks. In March 2023, OpenAI, the maker of ChatGPT, confirmed a breach of its servers, resulting in the leakage of personal information, as well as intelligence built with GPT-3.5 and GPT-4. The breach was not a surprise to the industry: on December 29, 2022, a thread labeled “ChatGPT — Benefits of Malware” was published in an underground forum, providing malware strains and techniques to hack and jailbreak ChatGPT, rendering its ethical programming void. These situations widen the risk and liability companies face.

This dilemma will increase and raise more questions that must be addressed as more industry players experiment with AI tools.

For example, Hertz partnered with Ravin AI in November 2022 to run a pilot on vehicle inspection. During the 2023 International Car Rental Show, a handful of companies presented AI-based vehicle damage valuation services. These industry moves raise questions of ethics and liability:

  • Would repair estimates be in line with car rental laws when it comes to vehicle repair, such as California CIV § 1939.03?
  • Would the use of AI tools to inspect vehicles possibly give rise to a negligent entrustment claim, and thus void federal protections, if the AI misses a possible risk and the vehicle is handed off to a consumer?

In another example, leading peer-to-peer operator Turo is using AI tools to optimize pricing, risk, and marketing efforts, thus decreasing fraud and ill-intentioned rental abuse. How do these actions affect a consumer’s protection under local, federal, or even offshore laws, such as the California Consumer Privacy Act or the EU General Data Protection Regulation?

As rental car companies speed toward AI, the entire industry could benefit from educating itself on the risks and vulnerabilities that publicly accessible AI poses, taking a stance on the matter, and adding it to its legislative agenda.

Carlos Bazan is a business strategist focused on operational and compliance topics in the car rental industry. 
