There's a short film from 2014 I keep going back to re: AI - https://www.youtube.com/watch?v=YrXWWXGKowo - Henry Dunham's first movie. I keep coming back to how quickly a true AI would use the universe of data to manipulate humans. So far removed from Grok saying Elon isn't "woke" or whatever.
Interesting post. This idea of something that's actually highly relevant (e.g., AI surveillance risk) becoming a passé topic of discussion folds into a broader question about how memes (of any kind) are traded in the marketplace of ideas.
I'm increasingly of the mind that memes (e.g., "xyz is a risk we should worry about") circulating in the political class are constantly used as signaling tokens, i.e., they're "bought" or "traded" as a way of declaring a particular political affiliation or signaling class differentiation.
The content or truthfulness of a meme, in many cases, is secondary.
See how this has played out with discussions around AGI risk specifically, as increasingly cynical actors have entered the conversation.
As the immateriality of software has come to manifest in sleeker, more ephemeral physical devices, portrayals of AI systems have shifted from horrific threats to relatable, quasi-human entities capable of feeling, ones that foster empathy and pathos. It's an evolution in how we conceptualize and envision technological creations. Yet, as you ask, "what do we lose when we humanize the cyborg?" A fascinating take on our current relationship with AI and its evolution.