What’s an “appropriate” relationship with your AI assistant?

Is it harassment when it’s a disembodied non-person?

Brant G
5 min read · May 28, 2019


Note: this is adapted from a Twitter thread in response to the tweet screen-capped below.

There’s a (legitimate) concern with the pre-programmed responses to some racy — and possibly inappropriate — statements you can make to the pantheon of digital AI assistants, most of which use female voices. But does that necessarily make them female?

I totally get where the sentiment comes from if you’re going to view a disembodied digital assistant as just a random stranger. But how many of those things are said to someone with whom you have at least a friendly & non-professional relationship?

I would agree that “You’re a slut” is a stretch, but it’s not like I’ve never heard female friends using that language (again, consensually, among friends). But are “you’re hot” or “you’re pretty” considered harassing language to someone with whom you have a friendly and non-working relationship?

Now, if the issue here is one of power dynamics, that opens up a whole new realm of ethical issues.

What’s the expected relationship between an end-user & a non-corporeal AI assistant?
Boss — employee?
Patron — servant?
Master — slave?

I mean, the “slave” comparison is certainly hyperbolic; quite frankly, it’s intended to be. But check out the comparisons:
The AI didn’t have a say in whether or not you get to access it.
The AI was bought & paid for.
The AI doesn’t really have a right of refusal.
You can discard it at any time.

What characterization would you use for that relationship? Because the power dynamics are certainly very different for a digital tool you carry around in your pocket than they are for someone you’re trying to date, or your neighbor in the apartment below you, or the person sitting across from you on the bus.

The statements in the original chart are absolutely sexual harassment in the context of a professional working environment, between strangers on the street, or among casual acquaintances. But are they still harassment between romantic partners? Are they still harassment between life-long friends?

That is where we need to better define what “relationship” we have with AI assistants. It’s more than just co-workers, as the chart assumes, but how ‘intimate’ is it?

I’m betting your phone has seen you naked more than your coworkers have.
I’m betting that voice AI has looked up stuff you’d never ask colleagues about.
I’m betting you’ve taken your phone places you’d never take a casual acquaintance.
And I’m betting there’s a non-trivial part of your AI assistant search history that you wouldn’t want anyone at work, or a stranger on the street, knowing anything about, huh?
Even in incognito mode, if you used the AI assistant to make the search, vestiges of that search are stored somewhere in the system, just not locally.

So the relationships are already different, and that requires a new set of rules, expectations, & boundaries.

And that’s a much longer discussion than Twitter could handle, so I bumped it over here to see what other people think and how they might react.

Yes, there’s definitely a socialization issue when a female voice responds to some pretty racy questions or comments. Treating ‘female-sounding’ interactive surrogates in a demeaning fashion may well contribute to a desensitization that engenders similar treatment of women who aren’t digital AI assistants. It’s certainly worthy of further investigation.

How does that socialization change if it’s a male voice? How does the male-female interaction of user/AI voice affect the user’s attitudes? Does the sexual orientation of the user matter?

Ultimately, there’s a whole host of questions here:

  1. What sort of language is considered harassing, and under what circumstances? Things you can say to a romantic partner are totally inappropriate in other contexts. But what about things you say to life-long non-romantic friends? What’s acceptable between work colleagues if their relationship extends beyond work? And what sort of language are you expected to model at work for those who don’t have those outside relationships?
  2. Are there any absolutes in language that are always considered harassing under any circumstances?
  3. What is the appropriate analogue to the relationship between a ‘user’ and an ‘AI assistant’? Is there one? Are we adjusting the rules of an existing relationship, or are we creating entirely new rules?
  4. Does the presumed gender of the user and/or the voice matter? If so, how? Why? Does that change any of the answers in #3?
  5. What’s the public etiquette if you hear someone inappropriately addressing the AI assistant on a cell phone? Do you get to interrupt? Take the phone away? Call the cops?

These sorts of relationships are only going to become more common and more fraught with socio-dynamic peril as the AIs grow more sophisticated and develop greater ‘personality’ that evolves and morphs over time to react to the users’ demands, behaviors, and expectations.

It’s not enough that we’re coming to a reckoning over how we can & should treat each other. We’re now starting to move into a whole new territory, where someone crunching code is literally making life-altering, personality-driven decisions for everyone else who uses the AI they develop.

Yikes.

If you enjoyed this, please clap & let others know, so they get a chance to see it, too, since that doesn’t seem to happen nearly enough. And please feel free to share your feedback — Lord knows I need whatever I can get… ¯\_(ツ)_/¯
