
Algorithms have been all over the news in recent months, as social-media companies use the mathematical formulas to combat “hate speech,” control the distribution of news stories and target us with ads based on our web-surfing habits.

Generally, the formulas are secrets protected more securely than the gold at Fort Knox.

But a new poll by Pew Research finds that people either don’t trust them at all or don’t trust them in a wide array of circumstances.

The research organization Friday said algorithms “are all around us, utilizing massive stores of data and complex analytics to make decisions with often significant impacts.”

For example, they “recommend books and movies for us to read and watch, surface news stories they think we might find relevant, estimate the likelihood that a tumor is cancerous and predict whether someone might be a criminal or a worthwhile credit risk.”

But the American public’s skepticism about them “spans several dimensions.”

To start, 58 percent of Americans believe computer programs will always reflect some level of human bias.

After all, humans write the programs.

“The public worries [also] that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation,” Pew said.

“The survey presented respondents with four different scenarios in which computers make decisions by collecting and analyzing large quantities of public and private data. Each of these scenarios were based on real-world examples of algorithmic decision-making … and included: a personal finance score used to offer consumers deals or discounts; a criminal risk assessment of people up for parole; an automated resume screening program for job applicants; and a computer-based analysis of job interviews.”

The public’s response?

Fifty-six percent find it unacceptable to use them for criminal risk assessments, and it goes up from there: 57 percent reject their use in job applications, 67 percent in job interviews and 68 percent in financial scores.

Pew said several themes drive the concern: the systems violate privacy, are unfair, remove the human element from important decisions and fail to capture the nuances of complex human situations.

Further, people have different levels of comfort based on the context, Pew found.

“When it comes to the algorithms that underpin the social media environment, users’ comfort level with sharing their personal information also depends heavily on how and why their data are being used. A 75 percent majority of social media users say they would be comfortable sharing their data with those sites if it were used to recommend events they might like to attend. But that share falls to just 37 percent if their data are being used to deliver messages from political campaigns.”

Twenty-five percent of respondents say they have negative encounters online, 21 percent say they frequently have experiences they feel connect them with others and 44 percent say they find content that is amusing.

Most have a mix of positive and negative experiences. Fifty-four percent say they see an equal mix of people being mean or bullying and people being kind or supportive. Twenty-one percent see more meanness and 24 percent see more kindness.

The perceptions differ according to race. One quarter of whites think the personal finance score concept would be fair, but that share rises to 45 percent among blacks. By the same token, 61 percent of blacks think the criminal risk score concept is not fair, but that share falls to 49 percent for whites.

Seventy-four percent say the content people post on social media is not reflective of how society more broadly feels about issues.
