Mitigating legal risk while accelerating the adoption of immersive technologies through machine learning
January 22, 2021
The world of immersive technologies—encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR)—is at the forefront of the current trend of digital disruption.
These technologies create realistic digital landscapes that engage users in meaningful in-depth experiences and interactions.
Much of the initial development has focused on consumer-facing applications such as video games and entertainment, but companies increasingly recognize the power of immersive technologies to transform the business environment.
In the 2020 Augmented and Virtual Reality Survey conducted by Perkins Coie LLP, Boost VC, and the XR Association, nearly three-quarters of industry leaders polled indicated that they expect immersive technologies to be mainstream within the next five years.
This is consistent with the industry forecast that, in the next several years, business spending on AR and VR will surpass consumer spending. This article examines how machine learning is accelerating the adoption of immersive technologies and recommends ways businesses can mitigate the associated legal, compliance, and ethical risks.
How can machine learning accelerate the adoption of immersive technologies?
Machine learning enables applications to process data more efficiently and heighten the 3D experience by selectively rendering only certain portions of a user’s field of vision or by strategically compressing images. Faster data transmission over wireless connections, without noticeable quality loss, contributes to a smooth and successful user experience.
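The selective-rendering idea described above (often called foveated rendering) can be sketched in a few lines. The function name, parameters, and the simple block-averaging scheme below are illustrative assumptions for a grayscale frame, not a description of any shipping rendering engine:

```python
import numpy as np

def foveated_downsample(frame, gaze, radius, factor=4):
    """Keep full resolution within `radius` pixels of the gaze point;
    coarsen the periphery by averaging `factor` x `factor` blocks.

    `frame` is an (H, W) grayscale array; `gaze` is a (row, col) pair.
    """
    h, w = frame.shape
    rows, cols = np.ogrid[:h, :w]
    dist2 = (rows - gaze[0]) ** 2 + (cols - gaze[1]) ** 2
    fovea = dist2 <= radius ** 2  # pixels the user is actually looking at

    # Build a coarse version of the frame by block averaging.
    coarse = frame[: h - h % factor, : w - w % factor]
    coarse = coarse.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    coarse = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

    out = frame.astype(float).copy()
    out[: coarse.shape[0], : coarse.shape[1]] = coarse
    out[fovea] = frame[fovea]  # restore full detail around the gaze point
    return out
```

Rendering or transmitting the coarse periphery at lower fidelity is what reduces bandwidth without a noticeable quality loss, since peripheral vision is far less acute than foveal vision.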
To improve content quality and personalization, predictive algorithms can map functionality to a user’s behavior in real time. Machine learning models capture data from a user’s repeated interactions with an application and then tailor the experience to the user’s preferences and capabilities. These capabilities are particularly relevant for employee training and development, identified in the Survey as an area that businesses are most likely to focus on over the next 12 months to improve their day-to-day operations.
Applications that allow businesses to draw inferences from individual preferences across their user base can also accelerate the creation of personalized content. For example, a training model can respond to individualized needs for intricate tasks such as surgery or complex manufacturing assembly.
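The adaptive loop described above can be sketched minimally. The class name, window size, and thresholds are illustrative assumptions, not taken from any particular training product:

```python
from collections import deque

class AdaptiveTrainer:
    """Raise or lower a training scenario's difficulty based on a
    rolling window of task outcomes (True = success)."""

    def __init__(self, window=10, raise_at=0.8, lower_at=0.4):
        self.outcomes = deque(maxlen=window)
        self.raise_at = raise_at
        self.lower_at = lower_at
        self.level = 1

    def record(self, success):
        self.outcomes.append(1 if success else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return self.level  # wait for a full window before adjusting
        rate = sum(self.outcomes) / len(self.outcomes)
        if rate >= self.raise_at:
            self.level += 1
            self.outcomes.clear()  # restart measurement at the new level
        elif rate <= self.lower_at and self.level > 1:
            self.level -= 1
            self.outcomes.clear()
        return self.level
```

A production system would replace the simple success-rate rule with a trained model over richer signals (completion time, error types, gaze data), but the structure of the loop, observe, score, adjust, is the same.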
Machine learning is also becoming quicker and nimbler at reacting to user input, thus improving outcomes in hazardous or time-sensitive scenarios. Survey respondents cited accident prevention and reduction of assembly errors as issues they expect to be addressed by immersive technology within the next two years. Predictive models that identify hazards or danger in real time to augment a user’s reaction speed can help prevent loss of life or damage to property in high-risk situations. These capabilities could also be deployed in AR defense applications to predict and flag likely risks on the battlefield. Applications can be adjusted generally using a broad data set from multiple users, or individually based on a specific user’s reaction time or “blind spots.”
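The reaction-speed augmentation described above reduces, at its simplest, to flagging a hazard early enough for a specific user to respond. The rule below is a kinematic toy, an assumption for illustration; a real AR safety system would use a trained model over many sensor channels:

```python
def should_alert(distance_m, closing_speed_mps, user_reaction_s, margin_s=0.5):
    """Return True if a hazard should be flagged now, given this user's
    measured reaction time plus a safety margin."""
    if closing_speed_mps <= 0:
        return False  # object is stationary or moving away
    time_to_impact = distance_m / closing_speed_mps
    return time_to_impact <= user_reaction_s + margin_s
```

Note how `user_reaction_s` is the per-user adjustment the article mentions: the same application alerts a slower-reacting user earlier, while the broad data set from multiple users would instead calibrate the default `margin_s`.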
How can businesses address legal and ethical concerns?
Given the potential exposure to data privacy concerns, regulatory requirements, and ethical considerations examined in our companion article, we recommend that businesses take the following steps to minimize their risk.
Implement a “privacy by design” and “compliance by design” approach at the outset. Contemplate privacy and compliance risk at the beginning of the application build process, and design offerings with these challenges in mind. Not only is this a best practice, but it avoids costly rework later.
Avoid using personal data to train machine learning algorithms, if possible. If the collection or use of personally identifiable information (“PII”) is not necessary to create the machine learning model, steer clear of PII to avoid onerous compliance obligations.
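Where identifiers cannot be dropped entirely, one common risk-reduction step is to replace them with keyed hashes before records enter a training pipeline. This is a minimal sketch; note that keyed hashing is pseudonymization, not anonymization, and pseudonymized data may still be personal data under laws such as the GDPR, so it reduces rather than eliminates compliance obligations:

```python
import hashlib
import hmac

def pseudonymize(record, pii_fields, secret_key):
    """Replace the named identifier fields in a record with keyed
    SHA-256 hashes, leaving non-identifying fields untouched."""
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hmac.new(secret_key, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # stable pseudonym
    return out
```

Because the hash is deterministic for a given key, records from the same user still link together for model training, while the raw identifier never reaches the training set.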
Audit the source of all data used to train AI. Obtain necessary consent from users, or otherwise have a lawful basis for using their PII—including biometric data—to train algorithms. If PII is obtained from third parties, secure appropriate contractual protections, such as indemnities and warranties, in case the third party did not follow applicable data collection compliance regulations.
Maintain policies that accurately disclose the categories of PII collected and the purposes for collection. Develop internal company policies for appropriate collection and use of user data. Externally, publish consumer privacy notices that are accurate, comprehensive, and compliant with all applicable laws, including those governing data privacy and unfair or deceptive trade practices. Policies should explain what, where, and for how long PII is stored.
Include any required contractual provisions regarding PII of residents of the European Union (EU). In the EU, the General Data Protection Regulation (GDPR) regulates data collection and processing. If a company that is regarded as a data controller engages a contractor to process the PII of EU residents on the company’s behalf, the contract must include provisions required by the GDPR to ensure adequate protection of the data.
Evaluate algorithms and data sets for any potential bias. Screen all data sets and machine learning algorithms for unrepresentative demographic information and discriminatory patterns, as these may inadvertently skew the functioning of the application.
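A first-pass screen for unrepresentative data can be automated. The sketch below compares group shares in a training set against a benchmark distribution; the function name, field names, and tolerance are illustrative assumptions, and such a check supplements rather than replaces a full fairness audit:

```python
from collections import Counter

def representation_gaps(samples, attribute, benchmark, tolerance=0.05):
    """Return the demographic groups whose share of `samples` deviates
    from the `benchmark` distribution by more than `tolerance`, mapped
    to the signed size of the deviation."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in benchmark.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps
```

Running a check like this before each training cycle gives the documentation trail that regulators and internal compliance teams increasingly expect.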
Taking the necessary steps will help companies develop immersive technology applications driven by machine learning, while addressing the risks associated with collecting and using crucial data.
Gilbert Villaflor is a partner in Perkins Coie’s Technology Transactions & Privacy Law, Intellectual Property Law and Mergers & Acquisitions practice groups. Gilbert’s clients include well-known global companies driving innovation in software, cloud computing, machine learning, augmented reality, virtual reality, life sciences, and big data analytics. He can be reached at [email protected].
Christopher Wieman is an associate in Perkins Coie’s Technology Transactions & Privacy Law practice group and Interactive Entertainment Industry group. He counsels clients in the technology, food and beverage, gaming, and AI industries. He can be reached at [email protected].
Megan Von Borstel is an associate in Perkins Coie’s Technology Transactions & Privacy Law and Privacy & Security Law practice groups. Her clients range from Fortune 500 companies to startups in the technology, adtech, AI, food and beverage, and retail industries. She can be reached at [email protected].