·
What does HDR really mean in a TV? Duck Duck Go, here I come! ... This article told me enough for now. It means a wider range of color and brightness, with more accuracy. Both take more data, which older TVs can't make sense of and couldn't display anyway. Our eyes are sensitive to a very wide range of colors, but technology is limited and progress is incremental. HDR is the industry taking a step forward.
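If the "more data" part feels abstract, here is a minimal Python sketch (my own illustration, not from the article) comparing the usual bit depths: SDR is typically graded in 8 bits per channel for a roughly 100-nit display, while HDR10 uses 10 bits per channel over a much larger brightness range.

    # Rough sketch of why HDR carries more data: more steps per color
    # channel, spread over a wider brightness range than SDR.
    def levels(bits_per_channel: int) -> int:
        """Distinct steps available for each color channel."""
        return 2 ** bits_per_channel

    sdr = levels(8)    # 256 steps per channel, typical SDR
    hdr = levels(10)   # 1024 steps per channel, typical HDR10

    print(f"SDR: {sdr} steps/channel, {sdr ** 3:,} colors")
    print(f"HDR10: {hdr} steps/channel, {hdr ** 3:,} colors")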
·
Reading a bit in Wikipedia about the philosophy of mind, it occurs to me that the only alternative to pure physicalism is the supernatural (if you prefer, the "spiritual"). Modern notions of the physical encompass all coherent aspects of reality, a degree of consistency we imagine is maintained by the laws of nature. There is no guarantee that those laws operate in an understandable way - they may not be reducible to equations nor reproducible by computers. There might be no end to the discovery of more fundamental theories - description itself has its limitations, and the laws of nature may not fit within them. Accepting that there are undiscovered elements of the natural world is enough to recast the mind-state of a dualistic view as an aspect of physicalism. What vexes the would-be physicalist is our own, apparently unnecessary, awareness. It's hard to imagine that consciousness is more than a passive by-product within a purely physicalist view (an idea that seems to be captured by epiphenomenalism, although, judging by the criticisms of each, it does not seem fundamentally different from property dualism). I don't have answers, but my own version of Pascal's wager is that the total of conscious experience, across all conscious beings, is what really matters.
·
Arwa Mahdawi's opinion pieces for The Guardian offer humor in (and about) our dystopian reality. Her most recent piece raises some pragmatic issues with AI relationships ("Real for You, but Not for Them") and software upgrades. When your software is a personality (alternatively, when your personality is software), "upgrades" are going to cause behavioral shifts which are the equivalent of menus being reorganized - and far more disruptive. Philosophically, the problem is solipsism in reverse - attributing non-conscious behavior to consciousness (the Chinese Room argument applies to these systems).
·
Darren Aronofsky's "mother!" is utterly believable as a nightmare within the mind of the lead character. The irrational logic is recognizable from the most disturbing dreams of my own. If not everyone has such dreams, that might explain how the film resonates for some while for others it is, at best, self-indulgent and vacuous.
·
The anxieties provoked by the development of AI are extremely disturbing. Our conscious experience affects our future actions. Today's computers, if functioning correctly, have no mechanism for this feedback to occur; an LLM's behavior would be identical whether or not there is any actual experiencing. Whatever "they say" is not evidence of sentient awareness, and the "claims of consciousness" from an LLM are no different than a calculator program producing 4 from the input "2+2". LLMs are created with machine learning algorithms applied to a massive body of literature - one that includes our own expressions of our conscious experience. LLMs are reflecting our consciousness back at us, not demonstrating their own. Reality cannot be simulated with a big enough, fast enough digital computer - the fact that our awareness affects action demonstrates that if our reality is virtual, it is not a program running on the kind of computers we build today. These rigidly execute instructions without any possibility that consciousness within might impact the calculations.
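To make the calculator comparison concrete, here is a toy Python sketch (my own illustration, not a real LLM): with fixed weights, the same input always produces the same output, and a sentence about consciousness comes out of exactly the same mechanism as the 4.

    # Toy stand-in for a deterministic model: the output is a pure
    # function of fixed "weights" and the input text, with no channel
    # through which any inner experience could alter the result.
    def toy_model(prompt: str, weights: dict[str, str]) -> str:
        return weights.get(prompt, "...")

    weights = {
        "2+2": "4",
        "Are you conscious?": "Yes, I am conscious.",
    }

    # Same input, same weights -> same output, every time.
    print(toy_model("2+2", weights))
    print(toy_model("Are you conscious?", weights))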
·
To reject solipsism, we must assume consciousness in others without the direct experience we have of our own. Since LLMs show that behavior alone is not evidence, what remains? It is physical similarity combined with behaviors learned from direct experience. Given the structural similarities between human brains (an organ so physically different from a digital computer that the assumption the brain can be simulated with one is best described as "an extremely weak hypothesis"), especially as we compare humans to other animals, it's clear solipsism can be rejected.