Digital doubles: In the future, virtual versions of ourselves could predict our behavior

A digital twin is a copy of a person, product or process created using data. It may sound like science fiction, but some have claimed that you’ll likely have a digital twin within the next decade. As a copy of a person, a digital twin would – ideally – make the same decisions you would make if presented with the same information. This may seem like yet another speculative claim by futurists, but it is far more plausible than people might like to believe. While we tend to assume that we are special and unique, given enough information, artificial intelligence (AI) can make many inferences about our personalities, social behaviors and purchasing decisions.

The era of big data means that vast amounts of information (stored in “data lakes”) are collected about your expressed attitudes and preferences, as well as the behavioral traces you leave behind. Equally shocking is the extent to which organizations collect our data. In 2019, The Walt Disney Company acquired Hulu, a company that journalists and advocates said had a questionable record when it came to data collection. Seemingly benign phone apps (like those used to order coffee) can collect vast amounts of data from users every few minutes. The Cambridge Analytica scandal exemplifies these worries, with users and regulators alarmed by the prospect of someone being able to identify, predict and shift their behavior. But how worried should we be?

High or low fidelity

In simulation studies, fidelity refers to the degree to which a copy, or model, matches its target; simulator fidelity is the degree of realism of a simulation relative to real-world benchmarks. For example, a racing video game provides an image that speeds up and slows down as we press keys on a keyboard or gamepad. A driving simulator, by contrast, may have a windshield, chassis, gear stick, and accelerator and brake pedals, so the video game has a lower degree of fidelity than the driving simulator. A digital twin requires a high degree of fidelity and the ability to incorporate real-time, real-world information: if it is raining outside now, it is raining in the simulator.
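To make that idea of real-time fidelity concrete, here is a minimal Python sketch. The weather feed and the update cadence are invented for illustration; a real twin would ingest many live sensor streams continuously.

```python
# A minimal sketch (not a real product) of the real-time synchronization
# a high-fidelity digital twin would need. get_current_weather() is a
# hypothetical stand-in for a live sensor or weather-API feed.
import time

def get_current_weather():
    # Placeholder reading; invented values for illustration.
    return {"raining": True, "temperature_c": 14.0}

class DrivingSimulator:
    def __init__(self):
        self.state = {}

    def sync(self, observation):
        # Mirror the real world inside the simulation: if it is raining
        # outside now, it rains in the simulator.
        self.state.update(observation)

sim = DrivingSimulator()
for _ in range(3):  # a real twin would sync continuously
    sim.sync(get_current_weather())
    time.sleep(1)
print(sim.state)
```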

In industry, digital twins can have radical implications. If we can model a system of human-machine interaction, we can allocate resources, anticipate shortages and breakdowns, and make projections.
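As a toy illustration of such projections, the sketch below estimates how long a modelled stock will last at a given consumption rate. The function name and the numbers are invented; real industrial twins use far richer models.

```python
# A toy projection of the kind an industrial digital twin enables:
# given a modelled consumption rate, anticipate when stock runs short.
def hours_until_shortage(stock_units: float, usage_per_hour: float) -> float:
    if usage_per_hour <= 0:
        return float("inf")  # nothing is being consumed
    return stock_units / usage_per_hour

# Invented example: 480 components in stock, the line consumes 12 per hour.
print(hours_until_shortage(480, 12))  # -> 40.0 hours of runway
```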

A human digital twin would incorporate vast amounts of data about a person’s preferences, biases and behaviors, along with information about the user’s immediate physical and social environment, in order to make predictions.
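A rough sketch of the categories of data such a twin would have to integrate might look like the following. The field names are illustrative, not a real schema, and no actual predictive model is implied.

```python
# Illustrative only: the kinds of data a human digital twin would
# need to integrate, with a placeholder where prediction would sit.
from dataclasses import dataclass, field

@dataclass
class HumanDigitalTwin:
    preferences: dict = field(default_factory=dict)   # e.g. {"coffee": "flat white"}
    biases: dict = field(default_factory=dict)        # e.g. {"risk_aversion": 0.7}
    behavior_log: list = field(default_factory=list)  # timestamped behavioral traces
    environment: dict = field(default_factory=dict)   # immediate physical/social context

    def predict(self, situation: str) -> str:
        # A real twin would run a learned model here; this placeholder
        # only marks where prediction would happen.
        return f"predicted response to {situation!r} (model not implemented)"
```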

These requirements mean that achieving a true digital twin remains a distant possibility. The number of sensors needed to gather the data, and the processing capacity needed to maintain a virtual model of the user, would be enormous. For now, developers settle for low-fidelity models.

Ethical Issues

The production of a digital twin raises social and ethical issues concerning data integrity, the accuracy of a model’s predictions, the surveillance capabilities required to create and update a digital twin, and ownership of and access to a digital twin.

British Prime Minister Benjamin Disraeli is often quoted as saying, “There are three kinds of lies: lies, damned lies and statistics,” implying that numbers cannot be trusted. This sentiment reflects a misunderstanding of how statisticians collect and interpret data, but it raises an important concern.

The predictions made about us rely on the collection and analysis of statistics about our behaviors and habits to infer how we would behave in given situations.

One of the most significant ethical issues with a digital twin concerns the quantitative fallacy: the assumption that numbers have an objective meaning divorced from their context. When we look at numbers, we often forget that they take on specific meanings from the measuring instruments used to collect them, and a measuring instrument may work in one context but not in another.
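A trivial example of the fallacy: the bare figure 30 carries no meaning until we know which instrument, and which unit, produced it. The thermometers below are hypothetical.

```python
# The quantitative fallacy in miniature: identical digits from two
# different instruments mean very different things.
reading = 30  # the same number from two hypothetical thermometers

as_celsius = reading                          # 30 C: a hot day
as_fahrenheit_in_c = (reading - 32) * 5 / 9   # 30 F is about -1.1 C, near freezing

print(f"30 as Celsius: {as_celsius} C; 30 as Fahrenheit: {as_fahrenheit_in_c:.1f} C")
```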

When collecting and using data, we must recognize that any selection includes some features and excludes others. Often this selection is made for convenience or because of the practical limitations of technology. We must critically assess any claims based on data and artificial intelligence, because the design decisions behind them are not accessible to us. We need to understand how the data was collected, processed, used and presented.
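A small, invented example of how a convenience-driven selection can mislead: dropping a single contextual field changes what the numbers appear to say.

```python
# Invented data: omitting the "context" field for convenience makes a
# refunded billing error look like genuine purchasing behavior.
import statistics

purchases = [
    {"amount": 5.0,  "context": "routine"},
    {"amount": 5.0,  "context": "gift"},
    {"amount": 95.0, "context": "refunded error"},
]

amounts_only = [p["amount"] for p in purchases]          # context dropped
print(statistics.mean(amounts_only))                     # 35.0 -- misleading

genuine = [p["amount"] for p in purchases
           if p["context"] != "refunded error"]          # context kept
print(statistics.mean(genuine))                          # 5.0
```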

Power Imbalances

Power imbalances are a growing part of the public debate over data, privacy and surveillance. On a smaller scale, these imbalances can produce or deepen digital divides, that is, the gap between those who have access to digital technologies and those who do not. On a larger scale, they threaten a new colonialism based on access to, and control of, information and technology.

Even the creation of low-fidelity digital twins offers the possibility of monitoring users, making inferences about their behavior, attempting to influence them and representing them to others.

While this can be useful in health or education settings, failing to give users the ability to access and evaluate their own data can threaten individual autonomy and the collective good of society.

Affected individuals do not have access to the same resources as large corporations and governments. They lack the time, the training and perhaps the motivation. Consistent and independent oversight is necessary to ensure that our digital rights are preserved.
