6 Comments

I sense a bit of extreme ownership in here. But that only comes in baby steps (I think). Maybe a starting place is that when an AI doesn't generate what you expected, your reaction is 'ah, I must not have written the prompt right or given it the right instructions.' If that's the default reaction, individuals have a lot more power over its direction. Maybe?

author

Thanks for the comment, Frank. Could you elaborate more on "extreme ownership"? Building off what you say, I do think there needs to be a kind of humility in *how* we use these tools. At a more social level, though, I do think the notion that creators of technology have responsibility for how they are deployed and used is central.

Oooo. My mind went to the individual user level rather than the AI creators. Interesting. Despite being costly, I think that's the hope of RLHF (reinforcement learning from human feedback). In my terms, with my basic understanding, it's the refinement process where humans get to teach the AI judgement (to some degree at least).

Here's what might be unfortunate. Users might be able to identify the methods on which an AI was trained (>10,000 hours of RLHF, for example) to indicate the quality 🤦‍♂️ of the AI. But if academic research is any indication, the majority will only care about the headline... regardless of the methods behind it.

👆Take that with a grain of salt, because I'm sure someone deep in the AI game will scoff at my comments. Alas, it's the basics of my understanding right now.

Extreme ownership like Jocko Willink. Something goes wrong, you’re the first one to stand up and say ‘that’s on me.’

author

I suspect there's a way to bake in human virtue as part of the programming, and then use those 10k hours to train it - but they haven't asked me yet haha. I do think it's a problem to not know what goes into these things.

Thanks for the Willink reference - I'd heard of it vaguely but had to look it up. Interesting that his definition of extreme ownership is what I always thought of as just plain ol' responsibility!

...i wrote about it here (https://cansafis.substack.com/p/urkel-technology) and here (https://cansafis.substack.com/p/the-incredibly-super-duper-very-very) but find it interesting that all these advancements in A.I. and automation are in many cases solving problems we don't have at the moment...as with anything that might be put forth and advanced upon without a final end use or purpose in mind the results could be magical (think improvised art/music/etc.) or tragic and painful (think improvised art/music/etc.)...i saw someone on twitter talking about how easy it was for their tools to help you produce thousands of blog posts a day with thousands of words and couldn't help but be depressed thinking most of those words are just going to be read by A.I. bots preparing to write more A.I. blogs...A.I. in many ways is just a tchotchke technology...most of the joy I am finding from its use so far has been akin to getting a bouncing ball from the quarter slot machine at my laundromat...more and better things will come from it but i think the fear the more and worse things will accompany those is incredibly real and starting to percolate (see fake news, deep fake porn, and the purging of employment in the creative sectors of tech)...

author

Thanks! I'll check out your posts. Completely agree about a tool without a purpose....strikes me that, yes, AI *can* produce thousands of mediocre blog posts, but do we need it to? I'm personally waiting for the Star Trek computer! :) Last thing, I love the comment about "tchotchke tech" - reminds me of Milan Kundera's take on kitsch in The Unbearable Lightness of Being.
