· · ·
Ugh. Struggling to extend ideas from the AC-3 algorithm into a complete finite constraint-satisfaction solver without resorting to direct search. The reason is my interest in symbolic approaches, which I hope to apply to very large problems and even infinite domains. It may not be doable, but for the time being I've been playing with generalizing the notion of a variable in AC-3 to potentially overlapping variable vectors.

AC-3 prunes away at an explicit representation of each variable's range by eliminating values which are not consistent with all constraints. When the range of a variable is reduced, the constraints it participates in are re-examined in turn, possibly reducing the ranges of one or more other variables. For the AC-3 algorithm to be helpful, the primary constraints must be local, applying only to a subset of the variables. The constraints can be locally but not globally satisfied when AC-3 terminates.

Variables can be combined - a vector variable with a range of vectors - and explicit storage of those compound ranges can be similarly pruned, extending the AC-3 algorithm. Choosing combinations that keep time and space requirements low is what I'm up against.
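As a reminder of the baseline I'm generalizing from, here's a minimal AC-3 sketch over binary constraints - a toy formulation with made-up names, not this project's code:

```python
from collections import deque

def ac3(domains, constraints):
    """Minimal AC-3 over binary constraints.

    domains:     {var: set of values}
    constraints: {(x, y): pred(vx, vy)}; an entry is assumed present
                 for both directions of each constrained pair.
    Prunes domains in place; returns False if some domain empties.
    """
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Remove values of x that have no supporting value in y's domain.
        pruned = {vx for vx in domains[x]
                  if not any(pred(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            if not domains[x]:
                return False
            # Re-examine arcs pointing at x (except the one just used).
            for (a, b) in constraints:
                if b == x and a != y:
                    queue.append((a, b))
    return True
```

For example, x < y over {1, 2, 3} prunes to x ∈ {1, 2}, y ∈ {2, 3} - locally consistent, but a search (or more pruning machinery) is still needed to pick a solution.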
· · ·
Periodically, I tell myself I'm going to alternate exploratory coding (aka hacking) with a phase of refactoring toward a clear design (aka cleanup). The problem is that the clear-design phase may never happen - it's too easy to be satisfied with code that passes the current testing, however informal. On the other hand, it's easy to overcommit to a clear, elegant design, and if it doesn't meet some of the goals set for the project, that can be a major issue.

I'm here again, thinking my solo recreational coding would be more fun if I took time to consolidate what I've learned into a good design. Process options are plentiful, but I probably can't master the discipline most of them demand - e.g. separate branches for exploratory phases seem heavy-handed on a solo project. Alternating priorities always feels like the best thing to try, but I usually can't stick to it - maybe that ADHD diagnosis isn't so far off...

Of course, AI assistance is becoming standard practice, and beyond quickly finding simple mistakes, it can refactor efficiently and even take part in a friendly chat about the plan. The increase in productivity, even if human programmers end up only in a code-review role, is hard to resist in business.

But I'm programming because I enjoy it. Algorithmic assistance is great, as long as I can hope to understand the algorithms being used. Maybe I could figure out how LLMs work, but the current systems are massive, remote, black-box servers - which isn't part of the fun for me. The fun is opening the black boxes and figuring out what's going on inside.
· · ·
Woof! Slow progress... ⟪Sapphire⟫ has both logic and domain variables - a named value which may be left unspecified and is given a value as assertions are simplified and resolved. Both "x=3" and "3=x" mean the same thing - as an assertion, it is resolved in ⟪Silver⟫ if "x" evaluates to 3; if it evaluates to something else, a contradiction is flagged. If "x" has no value, ⟪Silver⟫ resolves the assertion by assigning 3 to "x". For finite problems, functions and relations can be treated as domain or logic variable spaces, respectively. To dig myself out from under earlier hacks, I'm rewriting things around what I'm calling "free order variables" for now, which are just ⟨key table⟩ pairs, replacing constrained expressions with, in theory at least, nested function calls...
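The "x=3"/"3=x" behavior is easy to sketch. Here's a hypothetical toy resolver - invented names, a plain dict for bindings, and values assumed to be non-strings so names and values don't collide - nothing like the real internals:

```python
class Contradiction(Exception):
    """Raised when an equality assertion conflicts with existing bindings."""

def resolve_equality(env, lhs, rhs):
    """Resolve an assertion like x=3 or 3=x against a dict of bindings.

    Strings are treated as variable names; anything else is a value.
    An unbound variable gets assigned; a bound one must agree.
    """
    def value_of(term):
        # A bound name resolves to its value; an unbound name stays a name.
        return env.get(term, term) if isinstance(term, str) else term

    a, b = value_of(lhs), value_of(rhs)
    if isinstance(a, str) and a not in env:    # unbound variable on the left
        env[a] = b
    elif isinstance(b, str) and b not in env:  # unbound variable on the right
        env[b] = a
    elif a != b:
        raise Contradiction(f"{lhs} = {rhs} conflicts with current bindings")
    return env
```

Symmetry falls out for free: "x=3" and "3=x" take different branches but leave the same binding behind.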
· · ·
Changes are settling, and maybe my OCD will give me a chance to work on something else for a bit. Between making these changes too haphazardly and global distractions, everything else has been suffering neglect. A pause, then forward on Slitherlink puzzles and, hopefully, some graphics fun!
· · ·
The transition to treating constraints as a class of ⟪Iron⟫ Interpreter internal values has been... slow. I make a mess when I'm exploring to find a solution or implementation that works and meets my design goals. Breaking changes are a time to consolidate the current design before going forward... I was too focused on getting to the next exploration - constraints as first-class values - and, well, messed up.
· · ·
I often pass keyword parameters to a subroutine unchanged, and ended up with a lot of "keyword=keyword" code in Python function calls. Reserving "=" for the equality predicate (I just replace "=" with "==" when generating Python or C), I instead use "←" to pass keyword parameters - "keyword←7". I added a postfix operator "←←" to pass the value of the variable as the keyword, so I can abbreviate "keyword←keyword" as "keyword←←".
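A sketch of the rewrite a preprocessor could do for this - using "←" for the keyword-passing operator as it's rendered here, and naive comma splitting (the real pass would work on tokens and cope with nested commas and strings):

```python
ARROW = "\u2190"  # the "←" keyword-passing operator

def rewrite_kwargs(argtext):
    """Rewrite a call's argument text for Python generation:
    "name←expr" becomes "name=expr", and postfix "name←←" becomes
    "name=name". Naive about nested commas; illustration only."""
    out = []
    for arg in (a.strip() for a in argtext.split(",")):
        if arg.endswith(ARROW * 2):
            name = arg[:-2]                    # strip the postfix "←←"
            out.append(f"{name}={name}")
        elif ARROW in arg:
            name, value = arg.split(ARROW, 1)  # "name←expr"
            out.append(f"{name}={value}")
        else:
            out.append(arg)                    # positional argument
    return ", ".join(out)
```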
· · ·
I've been on a side trip to additional "rational" code generation based on syntactic analysis rather than ad-hoc text manipulation. This next stage let me translate operators ("⨯" and "⋅" for cross and dot product) into calls to generic functions, but there's more work to make it remotely robust.
· · ·
I've been bogged down by my bootstrapping shenanigans. I finally decided to re-execl the current build whenever cyclic dependencies could change generated code. Because I'm building my tools in place, I use a "revert-bootstraps" routine often to deal with all-too-frequent breakage.
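The re-execl trick is roughly this - hypothetical helper names, with a content hash standing in for real dependency tracking:

```python
import hashlib
import os
import sys

def digest(paths):
    """One hash over a set of generated files, in a stable order."""
    h = hashlib.sha256()
    for p in sorted(paths):
        with open(p, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def reexec_if_changed(paths, baseline):
    """If generated code changed since this process started (baseline
    taken at startup), replace the running process with a fresh run of
    itself so it executes the regenerated code."""
    if digest(paths) != baseline:
        os.execv(sys.executable, [sys.executable] + sys.argv)
```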
· · ·
More language-infrastructure progress - I've got code generating from the ⟪Violet⟫ language layer that bridges between ⟪Cobalt⟫ (aka C) and ⟪Midnight⟫ (aka Python). I generate code that uses Python's "ctypes" module to call C from Python, making it easy to mix and match implementations.
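The underlying ctypes pattern, hand-written here for libc's strlen (the generated bindings are more involved, but the load/declare/call shape is the same):

```python
import ctypes
import ctypes.util

# Load the C runtime and declare one function's signature.
# Declaring argtypes/restype keeps ctypes from guessing conversions.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strlen.argtypes = (ctypes.c_char_p,)
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"mix and match"))  # 13
```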
· · ·
I took a small step toward sensible language handling by generating a full parse tree for a small input file. Starting life as a no-op, my evolving "zypp" preprocessor converts ⟪Midnight⟫ and ⟪Cobalt⟫ to Python and C, respectively, in an entirely ad-hoc fashion. At least I know I can create an inviting bug hotel. Eventually, each language layer I use will share a unified parser but have its own intermediate code generator, which will perform first-level semantic checks. The intermediate language will support high-level primitives that can be lowered to target subsets, which can then be used to emit code. My ⟪Sapphire⟫ project has a good deal of infrastructure to cope with.
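A toy version of the parse-tree step - a recursive-descent sketch over a made-up nested-call grammar, nothing like zypp's actual input:

```python
import re

# Tokens: identifiers, integers, and the punctuation of call syntax.
TOKEN = re.compile(r"\s*([A-Za-z_]\w*|\d+|[(),])")

def parse(src):
    """Parse 'name(arg, ...)' expressions into nested tuples:
    ("name", [args...]); bare names stay strings, integers become ints.
    Assumes well-formed input (no error recovery)."""
    tokens = TOKEN.findall(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expr():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok.isdigit():
            return int(tok)
        if peek() == "(":          # a call: name(args...)
            pos += 1
            args = []
            if peek() != ")":
                args.append(expr())
                while peek() == ",":
                    pos += 1
                    args.append(expr())
            pos += 1               # consume ")"
            return (tok, args)
        return tok                 # a bare name

    return expr()
```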
· · ·
I'm using "⋅" for dot product and "⨯" for cross product. My preprocessor translates these from binary operators into calls to "dot" and "cross", which are native in GLSL and implemented in Python to work on number sequences of equal length (dot) or only 3-tuples of numbers (cross).
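The Python side is just (a minimal version of the two functions; the real ones may differ in detail):

```python
def dot(u, v):
    """Dot product of two equal-length number sequences."""
    if len(u) != len(v):
        raise ValueError("dot: sequences must have equal length")
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Cross product, defined only for 3-tuples of numbers."""
    if len(u) != 3 or len(v) != 3:
        raise ValueError("cross: arguments must be 3-tuples")
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])
```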
· · ·
While I use Unicode extensively and appreciate its near-universal adoption and support, I object to any description of it as a "character set". It is far closer to Donald Knuth's TeX than to an encoding of an alphabet.
For computing, a "character set" is an encoding of an alphabet - essentially, the association of an integer with each of a finite set of symbols. The symbols have associated perceptual representations (visual, auditory, tactile) that allow human beings to associate different sequences with different meanings. If we remove control values from Unicode (modifiers and typesetting indicators) and discard codepoints whose glyphs are similar to those of lower codepoints, we are left with a workable character set. We'd also like human beings to naturally associate meaning with words to form human-readable language.
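The control-value pruning can be roughly approximated with unicodedata categories - a sketch only; weeding out look-alike glyphs would additionally need confusables data, which the standard library doesn't ship:

```python
import unicodedata

def charset_candidate(cp):
    """Rough filter toward a 'workable character set': drop control,
    format (bidi overrides, joiners), surrogate, private-use,
    unassigned, and combining-mark codepoints."""
    return unicodedata.category(chr(cp)) not in {
        "Cc", "Cf", "Cs", "Co", "Cn",  # control-ish and unassigned
        "Mn", "Mc", "Me",              # combining marks / modifiers
    }
```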
The maximal character-set subset of Unicode overdelivers for the vast majority of human beings. Perhaps it will change over time, but only a portion is recognizable to many of us - my dinosaur DNA seems irreparable. This intentional inclusivity is a key feature of Unicode, but to be complete, it requires typesetting features.
And bidirectional text is a mess for formal languages. When we see 1÷2 we have to look at the surrounding characters to know if it's the same 0.5 (e.g. A 1÷2 A) or 2.0 (e.g. א 2÷1 א). Or is that really 5.0? Bidirectional overrides are only discouraged, so we can't really tell if AND is A→N→D or A←N←D (the same as D→N→A) by looking. Since my editor (Emacs) seems to follow Unicode rules, editing שרה means when I move the cursor to the right, it goes left! Further, these identically encoded arrows all point right, A→N→D, ש→ר→ה, indicating first to last for Latin and last to first in Hebrew. Is it better that the middle right paren in A)A is the same as the middle left paren in ש)ש (they're both encoded as hex 29)? With A→ vs א→, Emacs does the Unicode thing... oh yeah, the mathematical aleph is different: I use ℵ₀ when I print the size of a countably infinite subset of ℤ.
The mathematical typeface variations of the Latin alphabet are really great for presentation (I display ℤ when I know Z represents the integers) but frustrating when trying to make auditory distinctions.
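For what it's worth, compatibility normalization folds these presentation variants back to base letters - which is exactly why ℤ and Z (and the aleph symbol and the Hebrew letter) collide if you treat them as interchangeable identifiers:

```python
import unicodedata

# NFKC maps the double-struck Z (U+2124) back to plain "Z",
# and the aleph symbol (U+2135) back to HEBREW LETTER ALEF (U+05D0).
print(unicodedata.normalize("NFKC", "\u2124"))              # Z
print(unicodedata.normalize("NFKC", "\u2135") == "\u05d0")  # True
```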
So I (internally) use my own (still evolving) "Zy Logical Character Set", aka "Zylch" (the "zy" is vestigial, but still makes for entertaining acronyms), for formal languages. It's a single-byte encoding that starts with the digits 0–9, followed by the 26 uppercase and then the 26 lowercase letters of English. Digits, including hex, are represented with their own value! Then a bunch of symbols that change from week to week... Again, dinosaur brain, and it works for what I'm playing around with. [last edited 2026-01-18]
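That layout makes the digit property trivial. A sketch of the stable prefix (the trailing symbol block changes weekly, so it's left out):

```python
# First 62 codes of the layout described above:
# digits, then uppercase, then lowercase.
ZYLCH = ("0123456789"
         "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
         "abcdefghijklmnopqrstuvwxyz")

def zylch_code(ch):
    """Code of a character: '0'..'9' map to 0..9, so the hex digits
    'A'..'F' land on 10..15 'with their own value'."""
    return ZYLCH.index(ch)
```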
· · ·
I updated the color-text routines I use in all my Python tools so that their output processes faster when read into Emacs. One day I may create an IDE for my "neo-retro" logic-puzzle computing environment, but Emacs is still the best tool for me.
· · ·
For my symbolic logic projects, I'm organizing my code around a division between the abstract domains for logic and the concrete, programmer-centric domains.
· · ·
The git "index" is like a little mini-commit zone. I back up my bare git repositories regularly, but incrementally diffing and reviewing changes using the index feels lighter weight than a commit, and isn't at risk of being pushed or fetched by accident on my test machines.
· · ·
I've been enjoying Python's tremendous flexibility but am converging, with support from a preprocessor, on a consistent style. Once things are stable enough to focus on performance, I'll use my preprocessor to target a mix of C or C++ and a more performant runtime than the standard Python interpreter.