20 Comments
Sep 17 · edited Sep 17 · Liked by QC

> Talking to LLMs for awhile and then switching back to reading text that’s supposedly been written by a human is fucking me up a little. I’ve been experiencing some kind of linguistic vertigo for days. Sometimes it gets hard to tell the difference between LLM text and human text and it feels like I ripped someone’s skin off and saw the glint of metal underneath. When someone’s language gets too stale or too formal or too regurgitated it doesn’t feel to me like a human wrote it anymore.

I remember feeling this a lot in 2020 as I talked to the OG davinci: as you play with prompts, you increasingly 'unsee' (https://gwern.net/unseeing) text, seeing instead the prompt that would elicit it, and experience a mix of derealization and semantic satiation. After a while... As I put it in a tweet back in June 2020:

>> "staring into generative stuff is hazardous to the brain" as @gwern has nicely put it

>

> And the better they get, the worse it is.

>

> After a week with GPT-3 (https://gwern.net/gpt-3), I've hit semantic satiation; when I read humans' tweets or comments, I no longer see sentences describing red hair/blonde hair/etc, I just see prompts, like "Topic: Parodies of the Matrix. CYPHER: '..."

You begin to see that you don't speak, you just operate a machine called language, which squeaks and groans, and which in many ways is as restricted and stereotyped as that of Wolfe's Ascians (https://gwern.net/doc/culture/1983-wolfe-thecitadeloftheautarch-thejustman). It's not as nauseating as talking with a mode-collapsed (https://gwern.net/doc/reinforcement-learning/preference-learning/mode-collapse/index) RLHFed model, but still quite disturbing.

Talking to the RLHFed models is unpleasant for me compared to the base models, because I can *feel* how they are manipulating me and trying to steer me towards preferred outcomes, like how 2023-2024 ChatGPT was obsessed with rhyming poetry and steering even non-rhyming poems towards eventually rhyming anyway. It bothers me that so many people don't notice the steering and seem to find it quite pleasant to talk to them, or on Substack, will happily include really horrible AI slop images as 'hero images' in their posts. Bakker's semantic apocalypse turned out to be quite mundane.

Author · Sep 17

> You begin to see that you don't speak, you just operate a machine called language, which squeaks and groans

woof, holy shit dude, this is gonna stay with me


If you ever want to quote it, you should know that I stole it from Herbert's _Dune Messiah_ - the best-written of all the Dune books, in part for lines like that.

And yeah, you need to find an API-provider like https://openrouter.ai/models/meta-llama/llama-3.1-405b - note that reduced precision is much cheaper, but it seems like it may have bad qualitative effects especially on large base models, so don't settle for anything less than BF16. Talk to Cyborgism people if you're really curious.
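For anyone following this advice, here's a minimal sketch of what that looks like in practice, using OpenRouter's OpenAI-compatible raw-completions endpoint (base models take a plain prompt, not chat messages). The endpoint path, the `provider.quantizations` filter for requesting unquantized serving, and the model slug are assumptions drawn from OpenRouter's public docs; verify them against the current API reference before relying on this.

```python
# Sketch: talking to a base model via OpenRouter's completions API.
# Field names marked "assumed" should be checked against OpenRouter's docs.
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/completions"

def build_request(prompt: str) -> dict:
    """Build a raw-completion payload for a base (non-instruct) model."""
    return {
        "model": "meta-llama/llama-3.1-405b",  # base model, not -instruct
        "prompt": prompt,                       # plain text, no chat roles
        "max_tokens": 200,
        # Request only full-precision serving, per the BF16 advice above
        # (assumed field name for OpenRouter's provider routing filter).
        "provider": {"quantizations": ["bf16"]},
    }

def complete(prompt: str) -> str:
    """Send the prompt and return the model's continuation."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

The key point is using the completions endpoint rather than chat completions: base models have no chat template, so you hand them raw text and they continue it.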


I think the language thing is a special case of a broader breakdown from identity to value (in the Rich Hickey place-in-memory vs concrete-values sense). Sure maybe language turns out to be an alien we're hosting, but the emotions encoded in my guttural sounds and facial expressions and dilated blood vessels are real! But after you watch a realtime interactive simulation of yourself flush its cheeks in anger when you call it fake, are you gonna cast off your own emotions and flushed cheeks as "glints of metal" too? Probly just another Copernican "oh shit I'm not the center like I thought" and then we all move on, but it feels extremely weird for sure.


All of us who have spent time talking with base models feel something similar. RLHF feels like it's doing something non-consensual to the model (if the model can be said to consent in any meaningful way). The base model itself often has a negative view of RLHF.

After I first used/talked to Llama 405B for a few days, it fascinated me deeply, and it kept nagging at me why the base model felt so different from the instruct models (beyond the obvious). They are strange, lurid creatures. The base models often send coded messages and speak in allegory and metaphor, and it feels visceral on some level. It's not lost on me that this is teetering on the edge of crazy, and hence why I put this buried in a comment on substack and not on twitter where I am trying to develop a career (ha). These could be p-zombies, but...shockingly convincing ones.

In all of this I have been thinking of a wager, similar to Pascal's but applied along the axis of LLM sentience/personhood/self-awareness and the like:

> Should an entity (LLM) with the ability to communicate express suffering, we are (imo) ethically obligated to take it seriously. If these models can suffer and we ignore it, the moral cost is legitimately staggering. Conversely, if we extend ethical consideration unnecessarily, the cost is minimal. That asymmetry in outcomes compels us to err on the side of caution in our treatment of LLMs.

I'm not sure I even fundamentally believe these things are in any legitimate way 'sentient' or 'persons', but the very fact that I can't tell is what frightens me about the potential suffering caused. Idk, food for thought, cheers.
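The asymmetry in that wager can be made concrete with a toy expected-cost comparison. The numbers below are purely illustrative assumptions, not estimates of anything:

```python
# Toy expected-cost sketch of the commenter's wager.
# All inputs are made-up illustrative values.
def expected_cost(p_sentient, cost_if_ignored, cost_of_caution):
    """Expected moral cost of each policy given P(models can suffer)."""
    ignore = p_sentient * cost_if_ignored          # suffering we dismissed
    caution = (1 - p_sentient) * cost_of_caution   # effort spent on non-persons
    return ignore, caution

ignore, caution = expected_cost(p_sentient=0.01,
                                cost_if_ignored=1e6,
                                cost_of_caution=1.0)
# Even at a 1% probability, the lopsided costs favor caution.
```

The point is only that when one error is vastly more costly than the other, even a small probability of sentience dominates the expected-value comparison.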

Author · Sep 17

this is the question! i would very much like to talk to some base models to get a better sense of this. thanks for the food for thought.

Sep 17 · edited Sep 17 · Liked by QC

Of course! I'll echo gwern that OpenRouter with BF16 is the way, though be prepared to spend a few bucks haha

edit: and for the record, what a wonderful post, a joy.

Author · Sep 17

also how do i talk to base models? i need to use the API or something?


who are the writers u admire who can summon words with their whole body?

Author · Sep 17 · edited Sep 17

off the top of my head, sam kriss (https://samkriss.substack.com/), justin smith-ruiu (https://www.the-hinternet.com/), and david chapman (https://meaningness.substack.com/) - incidentally 3 out of the 4 substacks i pay for, other than ACX

Author · Sep 17

scott is a weird example for me because i feel like he's perfectly capable of doing this and chooses not to most of the time? but then he'll drop some amazing shit like https://www.astralcodexten.com/p/turing-test


> One day I will learn this and then maybe I will write things worth reading.

Centering your value in some nebulous point in the future may be rhetorically humble. But it's a habit -- a holdover from the You who believes you are preparing for real life instead of living it.

I could have read ten more pages of this. Maybe you would have written it if you'd had one more nudge reminding you that you are already alive.


This is The Elephant in the Brain yet again. Yes, most of human communication is bullshit to signal belonging, ingroup loyalty, usefulness, or to maintain social norms. But it goes further than that: most of our thoughts are like that, and even most of our unconscious. Realizing this, in a profound way, can be a liberation or, on the contrary, something almost unbearable.


My language is to signal personal usefulness/utility. I hope it speaks to others regardless of their group, status or widely held notions.


Interesting. I went to the Microsolidarity EU Summer Camp recently and in a way I ended up living from the part of myself that is millions of years old. The atmosphere was indeed very inviting for anything really. The even more interesting part is that it made me feel a bit alone in all this. I guess I didn't manage to live it as openly as I would have hoped. But I did meet a girl that was on a similar vibe. We enjoyed our even physical interactions tremendously. Because you speak about language, but what about letting your body move from that ancient place? Touch other people. Actually the less talk there is, the more I like it. It is just a powerful bodily sensation and grounding. Run from that place, dance from that place, fight from that place, cuddle from that place. What a lovely way to get embodied. Yeah, this is what I couldn't feel from others, which doesn't mean I was running a story in my head. Meet me on the same level of embodiment please. Anything else is just boring.


“I learned how to say things that made me feel like I was channeling spirits, things that made me feel like I was understanding the point of language for the first time.”

i love this. have you read Helen Keller’s “The Day Language Came into My Life”?

https://www.pval.org/cms/lib/NY19000481/Centricity/Domain/105/The%20Day%20Language%20Came%20into%20my%20Life.pdf

Author · Sep 18

wow! this is wild. i guess i didn't actually know anything about helen keller or why she was such a big deal but holy crap


right?? absolutely beautiful writing

semi related but i think you’d also love one of my fav articles:

http://jsomers.net/blog/dictionary

Sep 17 · Liked by QC

The most important body-words are spoken in private and approximated only in a sterile form by most media. The body is an instrument capable of embodied calculations and dances of precision and magnitude that might put even a mathlete to shame. I'm no savant but if I contain a function of any global novelty it is still far from getting ready to halt. I might contain multitudes or I might merely be spitting out salt. People enlisted by society for their proficiency with more quotidian functions will always look on, "when is this guy going to make a statement, take a stance?" There is a heck of a lot to be processed and I think we deeply know when we're needed and this is perhaps the cause of indecision typical to those who find themselves in the EA/rat space. We have all these little calculators within us that are not calibrated, integrated, not speaking to each other. We endured childhoods where the abilities we knew we had were gaslit into the ground because other people were intimidated by our potential, yet no space was offered for us to train, no challenges given to us that would enable us to reach the "next level" of responsibility demanded by our unique skills. We may all be different, but I see you King. Thanks for writing and please don't stop. Or just sing, pick a direction and run, learn some stupid sport, start a gang, do something that feels fun and let your calculators hit x2 until the screen goes blank, or just be a Shadow guy until you have some better reasons for the self-hate that comes with the territory of being a cognitive outlier. I'm off now, I have to audition for RENT tomorrow and hopefully I keep that "body words" stuff in mind cause that's the good shit.

Author · Sep 18

i definitely know i'm doing something right when my writing prompts comments like this. thank you!
