HubWeek panel explores ethics in the digital world

A HubWeek panel exploring ethics in the digital world featured computer scientist and entrepreneur Rana el Kaliouby and Harvard Professor Danielle Allen.

Clea Simon • Harvard
Oct. 3, 2019

Technology is only as good, or as evil, as those who create and use it.

That was the consensus of a panel on the thorny question of ethics in the digital world at a HubWeek event Wednesday in the Seaport District. Moderated by Harvard Business Review editor in chief Adi Ignatius, the group included computer scientist and entrepreneur Rana el Kaliouby, founder of Affectiva, and Danielle Allen ’01, James Bryant Conant University Professor.

Ignatius opened the discussion at the event, sponsored by Harvard University, The Boston Globe, Massachusetts General Hospital, and the Massachusetts Institute of Technology, with a confession: He has shared photos of his granddaughter on social media with a far-flung network of friends, despite rising worries over the risk of identity theft, illicit use of such images, future embarrassment, other privacy issues, and the lack of consent. He has drawn significant criticism for this, he noted, owing to the mushrooming sense that technology is “dark,” or ill-intentioned.

El Kaliouby, whose company develops emotion-recognition software, took up Ignatius’s case, arguing that such a condemnation is simplistic. However, she did add a caveat to her defense: Internet communities — such as Facebook — have become nearly ubiquitous and have gathered a lot of our data without our really being aware of the implications. “I don’t think we’ve really thought through data privacy, issues of consent, and the conversation around unintended uses of this technology,” she said.

Allen, the director of the Edmond J. Safra Center for Ethics, enlarged on this point. Social media has made us all “public persons,” akin to celebrities who give up their privacy, to some extent, in return for exposure, she said. (She added that she never shares pictures of her children.)

Technology itself is “neutral, by and large,” continued el Kaliouby, when asked what role the platforms and the programs beneath them play. She pointed to the use of algorithms such as the ones her firm has developed. Emotion-detection software can be useful in mental health care, for example, but it also can be weaponized in surveillance. “We need to all come to the table and agree what’s fair use and what’s not.”

Allen took exception to that idea of neutrality, pointing out that technology grows out of human choices. “Every technology is a designed solution to a problem, seeking often to optimize something,” she said. “The choice about the problem and the choice of what to optimize is a decision. It is never neutral. That first priority-setting moment is incredibly important.”

That may be so, agreed el Kaliouby, but often that first priority is benign, if not downright humanitarian. With Affectiva, for example, her priority was to help children with autism recognize facial cues to ease their social interactions. “We developed a software development kit for this emotion-recognizing software — and very quickly we learned, yes, people are applying it in all sorts of different ways,” she said. “We had to go back and revise our terms and conditions,” mandating that it be used only with consent (i.e., people had to opt in, rather than opt out).

One danger, Allen said, lies in how people tend to trust technology instead of their own instincts. What we need to keep in mind, she said, is that our computers are as flawed as we are, particularly since humans are responsible not only for creating these technologies but also for feeding in data that may be biased or poorly chosen. Calling on technologists and the public alike “to recognize the distinctly human role that, as of now, machines are not even close to usurping — choice of purpose,” she stressed human responsibility. “Our first responsibility is to choose the purposes for our machines,” as we do for our laws and other tools, she said.

Ultimately, the two agreed, the issues surrounding ethics in technology need to be viewed as ongoing, a part of the process rather than a one-time question. “You have to have a process for repeated iterative risk assessment,” stressed Allen.

In addition, said el Kaliouby, ethics training and consideration have to be woven into the process. “You can’t get away by saying you’re just the technologist,” she said. “There have to be design elements in how we approach this — how we think about designing and building these technologies.

“There’s a lot of potential for doing amazing things,” she said. “I think of AI as a partnership for us to be safer, more productive, and healthier. We just, as a society, have to commit to using it that way.”

