BCLPSecCorpGov.com

Is your company approaching AI intelligently?

What are companies saying in risk factors?

May 24, 2023

While new forms of artificial intelligence and machine-learning systems, or AI, have garnered headlines in the mainstream press, many companies are evaluating the use of AI in their businesses. Perhaps to a greater extent than with previous technological developments, companies need to consider the materiality of the risks and uncertainties presented by AI. For example, Forbes recently identified the top five risks of generative AI that business leaders should watch out for: risk of disruption; cybersecurity risk; reputational risk; legal risk; and operational risk.

A number of companies have begun to address the implications of AI in recent 10-K and 10-Q filings, although they still represent fewer than 10% of companies in the major indices (S&P 500 and Russell 3000). Those that do span a broad range of industries: vehicle automation, technology, biomedical-pharmaceutical, healthcare, software, retail, insurance, consumer finance/lending, banking, credit cards/payments, asset management, online education, social media, gaming, hiring, workforce management, search engines, digital services, agriculture, and data science, among others.

Some companies have included standalone risk factors, such as those set forth in the examples below, beginning with Meta's most recent Form 10-Q.

Other companies address AI as one of a number of factors in broader risk disclosures, including the following topical areas:

  • Uncertain success of new platforms or products incorporating AI
  • Increasing competition through the introduction of new technologies, including AI, such as disruptive applications that render a company’s products or services obsolete
  • Potential failures in the incorporation of AI into business systems, as well as bugs, vulnerabilities or algorithmic flaws, including errors or inadequacies that are not easily detectable
  • Cybersecurity risks, including automated, targeted and coordinated attacks, and unauthorized use of AI tools that jeopardize the platform or operations or result in the unavailability of, or unauthorized access to, misuse, acquisition, disclosure, loss, alteration or destruction of company or customer data, including PHI, PII or other confidential information about individuals
  • Potential legal or reputational harm due to insufficient or biased data, unintentional bias or discrimination through the use of AI, or unauthorized use of AI tools, as well as any negative publicity or negative public perception of AI
  • Potential release of confidential or proprietary information as a result of the use of AI-based software by employees, vendors, suppliers, contractors, consultants or other third parties
  • Increased risks of cyberattacks or data breaches as a result of the use of AI to launch more automated, targeted and coordinated attacks, and the vulnerability of AI technology to cybersecurity threats
  • Ability to attract and retain employees with AI expertise, or to compete for talent using AI tools
  • Uncertainties in case law and regulations regarding intellectual property ownership and license rights, including copyright, in AI output, creating risks with respect to both the ability to adequately protect the intellectual property underlying AI systems and software and the potential for inadvertent infringement
  • Potential need to change business practices to comply with U.S. and non-U.S. laws and regulations, including recently enacted privacy laws and other potential laws and regulations relating to the use of AI in products or services, whether adopted by regulators such as the FTC, individual states such as California, or other countries, as well as the adoption of industry guiding principles for the use of AI, such as those of the NAIC

As with any other risks, companies should include AI in their enterprise risk management systems, to the extent material, as well as in their disclosure controls and procedures.

Examples of Standalone AI Risk Factors

Meta Platforms:

'We may not be successful in our artificial intelligence initiatives, which could adversely affect our business, reputation, or financial results.

We are making significant investments in artificial intelligence (AI) initiatives, including to recommend relevant unconnected content across our products, enhance our advertising tools, and develop new product features using generative AI. In particular, we expect our AI initiatives will require increased investment in infrastructure and headcount. AI technologies are complex and rapidly evolving, and we face significant competition from other companies as well as an evolving regulatory landscape. These efforts, including the introduction of new products or changes to existing products, may result in new or enhanced governmental or regulatory scrutiny, litigation, ethical concerns, or other complications that could adversely affect our business, reputation, or financial results. For example, the use of datasets to develop AI models, the content generated by AI systems, or the application of AI systems may be found to be insufficient, offensive, biased, or harmful, or violate current or future laws and regulations. In addition, market acceptance of AI technologies is uncertain, and we may be unsuccessful in our product development efforts. Any of these factors could adversely affect our business, reputation, or financial results.'

DoorDash:

'We may use artificial intelligence in our business, and challenges with properly managing its use could result in reputational harm, competitive harm, and legal liability, and adversely affect our results of operations.

We may incorporate artificial intelligence (“AI”) solutions into our platform, offerings, services and features, and these applications may become important in our operations over time. Our competitors or other third parties may incorporate AI into their products more quickly or more successfully than us, which could impair our ability to compete effectively and adversely affect our results of operations. Additionally, if the content, analyses, or recommendations that AI applications assist in producing are or are alleged to be deficient, inaccurate, or biased, our business, financial condition, and results of operations may be adversely affected. The use of AI applications has resulted in, and may in the future result in, cybersecurity incidents that implicate the personal data of end users of such applications. Any such cybersecurity incidents related to our use of AI applications could adversely affect our reputation and results of operations. AI also presents emerging ethical issues and if our use of AI becomes controversial, we may experience brand or reputational harm, competitive harm, or legal liability. The rapid evolution of AI, including potential government regulation of AI, will require significant resources to develop, test and maintain our platform, offerings, services, and features to help us implement AI ethically in order to minimize unintended, harmful impact.'

Planet Labs:

'Issues in the use of artificial intelligence, including machine learning and computer vision (together, “AI”), in our geospatial data and analytics platforms may result in reputational harm or liability.

AI is enabled by or integrated into some of our geospatial data and analytics platforms and is a growing element of our business offerings. As with many developing technologies, AI presents risks and challenges that could affect its further development, adoption, and use, and therefore our business. AI algorithms may be flawed. Data sets may be insufficient, of poor quality, or contain biased information. Inappropriate or controversial data practices by data scientists, engineers, and end-users of our systems could impair the acceptance of AI solutions. If the analyses that AI applications assist in producing are deficient or inaccurate, we could be subjected to competitive harm, potential legal liability, and brand or reputational harm. Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their purported or real impact on our financial condition and operations or the financial condition and operations of our customers, we may experience competitive harm, legal liability and brand or reputational harm.'

Lemonade:

'Our proprietary artificial intelligence algorithms may not operate properly or as we expect them to, which could cause us to write policies we should not write, price those policies inappropriately or overpay claims that are made by our customers. Moreover, our proprietary artificial intelligence algorithms may lead to unintentional bias and discrimination.

We utilize the data gathered from the insurance application process to determine whether or not to write a particular policy and, if so, how to price that particular policy. Similarly, we use proprietary artificial intelligence algorithms to process many of our claims. The data that we gather through our interactions with our customers is evaluated and curated by proprietary artificial intelligence algorithms. The continuous development, maintenance and operation of our deep-learned backend data analytics engine is expensive and complex, and may involve unforeseen difficulties including material performance problems, undetected defects or errors, for example, with new capabilities incorporating artificial intelligence. We may encounter technical obstacles, and it is possible that we may discover additional problems that prevent our proprietary algorithms from operating properly. If our data analytics do not function reliably, we may incorrectly price insurance products for our customers or incorrectly pay or deny claims made by our customers. Either of these situations could result in customer dissatisfaction with us, which could cause customers to cancel their insurance policies with us, prevent prospective customers from obtaining new insurance policies, or cause us to underprice policies or overpay claims. Additionally, our proprietary artificial intelligence algorithms may lead to unintentional bias and discrimination in the underwriting process, which could subject us to legal or regulatory liability. State legislatures and insurance regulators have shown increasing concern about the use of artificial intelligence and the potential for discrimination in the underwriting process. For example, in 2022, both the California and Connecticut Departments of Insurance issued bulletins advising insurers of their obligations related to unfair discrimination when using big data and artificial intelligence. We cannot predict what, if any, limitations state legislatures and insurance regulators may place on the use of artificial intelligence. Any of these eventualities could result in a material and adverse effect on our business, results of operations and financial condition.'

Yext:

'We are incorporating generative artificial intelligence, or AI, into some of our products. This technology is new and developing and may present both compliance risks and reputational risks.

We have incorporated a number of generative AI features into our products. This technology, which is a new and emerging technology that is in its early stages of commercial use, presents a number of risks inherent in its use. AI algorithms are based on machine learning and predictive analytics, which can create unintended biases and discriminatory outcomes. We have implemented measures to address algorithmic bias, such as testing our algorithms and regularly reviewing our data sources. However, there is a risk that our algorithms could produce discriminatory or unexpected results or behaviors (e.g., hallucinatory behavior) that could harm our reputation, business, customers, or stakeholders. In addition, the use of AI involves significant technical complexity and requires specialized expertise. Any disruption or failure in our AI systems or infrastructure could result in delays or errors in our operations, which could harm our business and financial results.'

ZipRecruiter:

'Issues with the use of artificial intelligence (including machine learning) in our marketplace may result in reputational harm or liability, or could otherwise adversely affect our business.

Artificial intelligence, or AI, is enabled by or integrated into some of our marketplace and is a significant element of our business. As with many developing technologies, AI presents risks and challenges that could affect its further development, adoption, and use, and therefore our business. AI algorithms may be flawed. Datasets may be insufficient, of poor quality, or contain biased information. Inappropriate or controversial data practices by data scientists, engineers, and end-users of our systems or elsewhere (including the integration or use of third-party AI tools) could impair the acceptance of AI solutions and could result in burdensome new regulations that may limit our ability to use existing or new AI technologies. If the recommendations, forecasts, or analyses that AI applications assist in producing are deficient or inaccurate, we could be subject to competitive harm, potential legal liability, and brand or reputational harm. Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their purported or real impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm. In addition, we expect that there will continue to be new laws or regulations concerning the use of AI. It is possible that certain governments may seek to regulate, limit, or block the use of AI in our products and services or otherwise impose other restrictions that may affect or impair the usability or efficiency of our products and services for an extended period of time or indefinitely.'

Eventbrite:

'We are incorporating generative artificial intelligence, or AI, into some of our products. This technology is new and developing and may present operational and reputational risks.

We have incorporated a number of third-party generative AI features into our products. This technology, which is a new and emerging technology that is in its early stages of commercial use, presents a number of risks inherent in its use. AI algorithms are based on machine learning and predictive analytics, which can create accuracy issues, unintended biases and discriminatory outcomes. We have implemented measures, such as in-product disclosures, which inform creators when content is created for them by generative AI and that they are responsible for the accuracy and editorial review of their content. There is a risk that third-party generative AI algorithms could produce inaccurate or misleading content or other discriminatory or unexpected results or behaviors (e.g., AI hallucinatory behavior that can generate irrelevant, nonsensical or factually incorrect results) that could harm our reputation, business or customers. In addition, the use of AI involves significant technical complexity and requires specialized expertise. Any disruption or failure in our AI systems or infrastructure could result in delays or errors in our operations, which could harm our business and financial results.'

Biogen:

'The increasing use of social media platforms and artificial intelligence based software presents new risks and challenges.

Social media is increasingly being used to communicate about our products and the diseases our therapies are designed to treat. Social media practices in the biopharmaceutical industry continue to evolve and regulations relating to such use are not always clear and create uncertainty and risk of noncompliance with regulations applicable to our business. For example, patients may use social media channels to comment on the effectiveness of a product or to report an alleged adverse event. When such disclosures occur, there is a risk that we fail to monitor and comply with applicable adverse event reporting obligations or we may not be able to defend the company or the public's legitimate interests in the face of the political and market pressures generated by social media due to restrictions on what we may say about our products. There is also a risk of inappropriate disclosure of sensitive information or negative or inaccurate posts or comments about us on social media. We may also encounter criticism on social media regarding our company, management, product candidates or products. The immediacy of social media precludes us from having real-time control over postings made regarding us via social media, whether matters of fact or opinion. Our reputation could be damaged by negative publicity or if adverse information concerning us is posted on social media platforms or similar mediums, which we may not be able to reverse. If any of these events were to occur or we otherwise fail to comply with applicable regulations, we could incur liability, face restrictive regulatory actions or incur other harm to our business. Additionally, the use of artificial intelligence (AI) based software is increasingly being used in the biopharmaceutical industry. Use of AI based software may lead to the release of confidential proprietary information which may impact our ability to realize the benefit of our intellectual property.'

FlexShopper:

'If we are unable to continue to improve our artificial intelligence (“AI”) models or if our AI models contain errors or are otherwise ineffective, our growth prospects, business, financial condition and results of operations would be adversely affected. 

Our ability to attract customers to our platform and increase the number of loans facilitated on our platform will depend in large part on our ability to effectively evaluate a borrower’s creditworthiness and likelihood of default and, based on that evaluation, offer competitively priced leases and loans and higher approval rates. Further, our overall operating efficiency and margins will depend in large part on our ability to maintain a high degree of automation in the loan application process and achieve incremental improvements in the degree of automation. If our models fail to adequately predict the creditworthiness of borrowers due to the design of our models or programming or other errors, and our models do not detect and account for such errors, or any of the other components of our credit decision process fails, we and our bank partner may experience higher than forecasted losses. Any of the foregoing could result in sub-optimally priced leases and loans, incorrect approvals or denials of leases and loans, or higher than expected lease and loan losses, which in turn could adversely affect our ability to attract new borrowers and bank partner to our platform, increase the number of leases and loans facilitated on our platform or maintain or increase the average size of leases and loans facilitated on our platform. Our models also target and optimize other aspects of the lending process, such as borrower acquisition cost, fraud detection, and stacking. However, such applications of our models may prove to be less predictive than we expect, or than they have been in the past, for a variety of reasons, including inaccurate assumptions or other errors made in constructing such models, incorrect interpretations of the results of such models and failure to timely update model assumptions and parameters. Additionally, such models may not be able to effectively account for matters that are inherently difficult to predict and beyond our control, such as macroeconomic conditions, credit market volatility and interest rate fluctuations, which often involve complex interactions between several dependent and independent variables and factors. Material errors or inaccuracies in such models could lead us to make inaccurate or sub-optimal operational or strategic decisions, which could adversely affect our business, financial condition, and results of operations. Additionally, errors or inaccuracies in our models could result in any person exposed to the credit risk of loans facilitated on our platform, whether it be us, our bank partner or our sources of capital, experiencing higher than expected losses or lower than desired returns, which could impair our ability to retain existing or attract new bank partner and sources of capital, reduce the number, or limit the types, of loans bank partner and sources of capital are willing to fund, and limit our ability to increase commitments under our credit facilities. Any of these circumstances could reduce the number of loans facilitated on the platform and harm our ability to maintain diverse and robust sources of capital and could adversely affect our business, financial condition and results of operations.'

AEye:

'If our deterministic artificial intelligence-driven sensing system is not selected for inclusion in ADAS technology by automotive OEMs or their suppliers, our business will be materially and adversely affected.

Automotive OEMs and their suppliers design and develop ADAS technology over several years. These automotive OEMs and suppliers undertake extensive testing or qualification processes prior to placing orders for large quantities of products, such as our active lidar products, because such products will function as part of a larger system or platform and must meet specifications that we do not control or dictate. We have spent, and will continue to spend, significant time and resources to have our products selected by automotive OEMs and their suppliers, which we refer to as a “design win.” In the case of autonomous driving and ADAS technology, a design win means our active lidar product has been selected for use in a particular vehicle model or models. If we do not achieve a design win with respect to a particular vehicle model, we may not have an opportunity to supply our products to the automotive OEM or its supplier for that vehicle model for a period of many years. In many cases, this period can be as long as five to seven years (or more). If our products are not selected by an automotive OEM or our suppliers for one vehicle model or if our products are not successful in that vehicle model, it is less likely that our product will be deployed in other vehicle models of that automotive OEM. If we fail to obtain design wins for a significant number of vehicle models from one or more automotive OEMs or their suppliers, our business, results of operations, and financial condition will be materially and adversely affected. Our business model for the Automotive market is based on our relationships with Tier 1 suppliers. If these relationships do not materialize, automotive OEMs may be less inclined to select our products for use in their vehicle models. The period of time from a design win to implementation is long and we are subject to the risks of cancellation or postponement of the contract or unsuccessful implementation.'

'Although we believe that lidar is an essential technology for autonomous vehicles and other emerging applications, market adoption of lidar is uncertain. If market adoption of lidar does not continue to develop, or adoption is deferred, or otherwise develops more slowly than we expect, our business will be adversely affected.

While our artificial intelligence-driven lidar-based sensing system can be applied to different use cases across end markets, approximately 81% and 58% of our revenue during the three months ended March 31, 2023 and 2022, respectively, was generated from automotive applications with a few customers in the aerospace, delivery, shuttle, railway, mining, and aviation sectors. Despite the fact that the automotive industry has expended considerable effort to research and test lidar products for ADAS and autonomous driving applications, the automotive industry may not introduce lidar products in commercially available vehicles on a timeframe that matches our expectations, or at all. We continually study emerging and competing sensing technologies and methodologies and we may incorporate new sensing technologies to our product portfolio over time. However, lidar products remain relatively new and it is possible that other sensing modalities, or a new disruptive modality based on new or existing technologies, including a combination of technologies, will achieve acceptance or leadership in the ADAS and autonomous driving space. Even if lidar products are used in initial generations of autonomous driving technology and ADAS products, we cannot guarantee that lidar products will be designed into or included in subsequent generations of such commercialized technology. In addition, we expect that initial generations of autonomous vehicles will be focused on limited applications, such as robo-taxis and shuttles, and that mass market adoption of autonomous technology may lag significantly behind these initial applications. The speed of market adoption and growth for ADAS or autonomous vehicles is difficult, if not impossible, to predict, and it is more difficult to predict this market’s future growth in light of the economic consequences of the lingering effects of the COVID-19 pandemic and other macroeconomic factors. Although we currently believe we have a differentiated market leading technology for the autonomous vehicle market, by the time mass market adoption of autonomous vehicle technology is achieved, we expect competition among providers of sensing technology based on lidar and other modalities to increase substantially. If, by the time autonomous vehicle technology achieves mass market adoption, commercialization of lidar products is not successful, or not as successful as we or the market expects, or if other sensing modalities gain acceptance by developers of ADAS products, automotive OEMs, regulators, safety organizations, or other market participants, our business, results of operations, and financial condition will be materially and adversely affected.

We are investing in and pursuing market opportunities outside of the Automotive market, including in the aerospace and defense, shuttle, delivery vehicle, drone, railway, intelligent transport, and mining sectors. We believe that our future revenue growth, if any, will depend in part on our ability to expand within new markets such as these and to enter new markets as they emerge. Each of these markets presents distinct risks and, in many cases, requires that we address the particular requirements of that market.

Addressing these requirements can be time-consuming and costly. The market for lidar technology is relatively new, rapidly developing, and unproven in many markets or industries. Many of our prospective customers are still in the testing and development phases and we cannot be certain that they will commercialize products or systems with our lidar products, or at all. We cannot be certain that lidar will be sold into these markets, or that lidar will be sold into any markets at scale. Adoption of lidar products, including our products, will depend on numerous factors, including whether the technological capabilities of lidar and lidar-based products meet users’ current or anticipated needs, whether the benefits associated with designing lidar into larger sensing systems outweighs the costs, complexity, and time needed to deploy such technology or replace or modify existing systems that may have used other modalities, such as cameras and radar, whether users in other applications can move beyond the testing and development phases and proceed to commercializing systems supported by lidar technology and whether lidar developers such as us can keep pace with the expected rapid technological change in certain developing markets, and the global response to the lingering effects of the COVID-19 pandemic, and other macroeconomic factors, and the length of any associated economic recovery. If lidar technology does not achieve commercial success, or if adoption of lidar is deferred or the market otherwise develops at a pace slower than we expect, our business, results of operations, and financial condition will be materially and adversely affected.'

'We may face risks associated with our reliance on certain artificial intelligence and machine learning models.

We rely on artificial intelligence and machine learning models in the development of our solutions for vehicle autonomy, ADAS, and industrial applications. The models that we use are developed or trained using various data sets. If the models are incorrectly designed, the data we use to train them is incomplete, inadequate, or biased in some way, or if we do not have sufficient rights to use the data on which our models rely, the performance of our products, services, and business, as well as our reputation, could suffer or we could incur liability through the violation of laws, third-party privacy, or other rights, or contracts to which we are a party.'
