New research published in Science earlier this month sheds light on a long-held theory about the value of weak social connections to job-seekers, but it has caused a stir among some digital ethicists and privacy advocates because of its methodology, the New York Times reported Sept. 24.
The study, conducted by researchers at LinkedIn, the Massachusetts Institute of Technology, and Harvard Business School, analyzed data from the LinkedIn networks of more than 20 million people over five years, from 2015 to 2019. By varying the “People You May Know” algorithm, the researchers found that weak social connections are more likely than strong ones to help LinkedIn users find jobs.
But one aspect of the study has raised ethical red flags: Some LinkedIn users’ job prospects may have been hurt by the research, and it’s not clear whether they were aware it was being conducted.
LinkedIn study tested the value of weak social connections
The five-year experiment tested a social theory dating back to 1973. Developed by Stanford sociologist Mark Granovetter, this theory posits that “infrequent, arms-length” connections, rather than close social connections, are more beneficial for one’s career, leading to more new employment opportunities, promotions, and bigger wage increases.
To test this theory as it relates to employment, researchers analyzed data from multiple large-scale randomized experiments that “varied the prevalence of strong and weak ties” in LinkedIn’s “People You May Know” tool, which recommends new connections to users on the networking site.
The researchers concluded that relatively weaker LinkedIn ties, such as an acquaintance with whom a user shares just 10 mutual connections, were twice as effective as stronger ones in helping users find jobs. This was particularly true for professionals in the digital sector whose jobs rely more heavily on technology like software, artificial intelligence, or machine learning.
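As a rough illustration of how tie strength can be analyzed (none of this code comes from the study), researchers often proxy the strength of a connection by the number of mutual connections and then compare job outcomes across strength buckets. The cutoffs, field names, and example data below are hypothetical.

```python
# Hypothetical sketch: proxy tie strength by mutual connections and compare
# job-transition rates across buckets. Thresholds and data are illustrative only.
from collections import defaultdict

def tie_bucket(mutual_connections: int) -> str:
    # Fewer mutual connections -> weaker tie (illustrative cutoffs)
    if mutual_connections < 5:
        return "very weak"
    if mutual_connections < 25:
        return "weak"
    return "strong"

def transmission_rates(ties):
    # ties: list of (mutual_connections, led_to_job: bool) pairs
    counts, hires = defaultdict(int), defaultdict(int)
    for mutual, led_to_job in ties:
        bucket = tie_bucket(mutual)
        counts[bucket] += 1
        hires[bucket] += int(led_to_job)
    return {bucket: hires[bucket] / counts[bucket] for bucket in counts}

# An acquaintance with 10 mutual connections falls into the "weak" bucket.
print(tie_bucket(10))  # -> "weak"
```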
LinkedIn’s methodology criticized
Though the findings of the study are potentially useful, they also suggest “some users had better access to job opportunities or a meaningful difference in access to job opportunities,” Michael Zimmer, an associate professor of computer science and the director of the Center for Data, Ethics and Society at Marquette University, told the New York Times. He continued that such “long-term consequences” should be considered “when we think of the ethics of engaging in this kind of big data research.”
The experiments LinkedIn ran are a common practice in the tech world and media, the Times noted. A/B testing allows companies to try out different versions of algorithms or headlines, for example, to determine which one performs best with users. LinkedIn’s privacy policy states it uses data to conduct research with the aim of giving users “a better, more intuitive and personalized experience,” and the company told the Times this recent research “acted consistently with” LinkedIn’s user agreement, privacy policy, and member settings.
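For readers unfamiliar with the mechanics, an A/B test of a recommendation algorithm typically assigns each user to one variant in a stable, random way and then compares an outcome metric between the groups. The sketch below is a generic illustration, not LinkedIn’s implementation; the experiment name, variant labels, and metric are hypothetical.

```python
# Generic A/B-test assignment sketch (not LinkedIn's code): hash each user ID
# into a stable bucket so the same user always sees the same algorithm variant.
import hashlib

VARIANTS = ["more_weak_ties", "more_strong_ties"]  # hypothetical variant names

def assign_variant(user_id: str, experiment: str = "pymk_tie_strength") -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]

# Each group's outcome (e.g., rate of job changes after forming recommended
# connections) would then be compared to see which variant performs better.
print(assign_variant("user-12345"))
```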
Not everyone pushed back on LinkedIn’s approach. Evelyn Gosnell, a behavioral scientist and managing director at Irrational Labs, argued on Twitter that the research provided valuable insight for job seekers and that running an experiment was the only way to arrive at such findings. She added that while it’s important for companies to secure users’ consent for such research, “we should all just assume that all platforms are running experiments.” In a direct message exchange on Twitter, Gosnell said companies often obtain informed consent by including such disclosures in lengthy terms and conditions agreements that users tend to gloss over, which poses a “tough challenge” for this kind of experimentation.
“Now, of course user consent is important. I think that by now, we should all just assume that all platforms are running experiments. Companies should all button up their T&C’s to make sure that’s stated in there, but I think that should be our running assumption.” — Evelyn Gosnell (@evelyngosnell) September 25, 2022
Though it’s theoretically possible the study could have harmed LinkedIn users, the study itself seems to raise few ethical concerns, argued Marian-Andrei Rizoiu, a senior lecturer in behavioral data science at the University of Technology Sydney, in a Sept. 15 piece for The Conversation.
“Nonetheless,” he added, “it is a reminder to ask how much our most intimate professional decisions – such as selecting a new career or workplace – are determined by black-box artificial intelligence algorithms whose workings we cannot see.”