It doesn't really describe the data well, but I've just set the script running again (after some code adjustments). It'll take a number of minutes to finish, but the current "test status" echoed to the screen, if I pause it, is:
ABCCBCBBCAACBACBAABABCBBCAACCBABABABCC
(That's listed from the first permute in the queue to the latest valid one; the list grows and shrinks as the search explores the 'field' and backtracks from dead-ends in the phase-space. The interrelationships themselves aren't in the printout, as I'm not wasting output cycles and character space on that whilst it runs; it's just a handy progress-meter for my benefit. Yes, there's a valid solution, at this level of enquiry, with just three elements... It's not the Four Colour problem...)
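That growing/shrinking status string is what a depth-first assignment with backtracking naturally produces. Here's a minimal sketch in Python of that general shape, assuming (purely for illustration, since the actual constraints aren't shown here) a "linked nodes must differ" rule and three values; the node names, function names, and constraint are my inventions, not the real script's:

```python
# Illustrative backtracking search: assign one of three values to each node
# in a pre-baked order, undoing the assignment on conflict. The printed
# partial assignment grows and shrinks exactly like the progress string above.

def solve(order, adjacency, values="ABC"):
    assignment = {}

    def ok(node, value):
        # Assumed constraint (for illustration only): linked nodes must differ.
        return all(assignment.get(nb) != value for nb in adjacency[node])

    def search(i):
        if i == len(order):
            return True  # every node assigned: a full solution
        node = order[i]
        for v in values:
            if ok(node, v):
                assignment[node] = v
                print("".join(assignment[n] for n in order[: i + 1]))  # progress
                if search(i + 1):
                    return True
                del assignment[node]  # backtrack: the progress string shrinks
        return False  # dead-end: caller tries its next value

    return assignment if search(0) else None

# Toy triangle graph, nodes 1-3, each linked to the other two.
nodes = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(solve([1, 2, 3], nodes))
```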
The looser the constraint, the more scope the search could cover (loads more partial solutions that could 'die' before reaching the theoretical tip of any branch), but the more likely it'll find a final solution quicker. The tighter I make it, the dead-ends increase, but so do entire dead-forks, which might shut off some previously-found solutions; that's because the search won't even get far into those old directions, so the screening quickly hops over into other (more likely to be useful) branches closer to the root of the respective decision-tree.
So the idea is to pre-bake the order of the nodes for full checking, so that moving on/retreating isn't merely servant to something incidental like node-labels shuffled into alphanumerical order. Instead, I find the most-connected node (rule 2, in my above post), knowing that any value I first give it prunes more future tree-branchings than starting with a weakly-connected node would. Then I look at the nodes that neighbour it (jointly, they all have the maximum possible value of one link to "all nodes ahead of them", as per rule 1) and select one of those with a similarly high future potential of link-backs (rule 2 as a secondary concern). The third one I look at might have links to both prior items (there may be several candidates for rule 1 showing double-connection to the first and second items) and thus be the most constrained; or perhaps "1 past link" (to either first-in-list or second-in-list) is still the most we have. Rinse and repeat.
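The ordering described above can be sketched as follows; this is my own minimal reading of it, with my own names (`bake_order`, the toy graph), and with Python's `max` tie-breaking standing in for whatever the real script does among equal candidates:

```python
# Sketch of the pre-baked node ordering: start from the highest-degree node
# (rule 2), then repeatedly pick the candidate with the most links back into
# the queue so far (rule 1), breaking ties by overall degree (rule 2 again).

def bake_order(adjacency):
    """adjacency: dict mapping node -> set of neighbouring nodes."""
    remaining = set(adjacency)
    # Rule 2 first: the most-connected node constrains the most later choices.
    first = max(remaining, key=lambda n: len(adjacency[n]))
    order = [first]
    remaining.remove(first)
    while remaining:
        # Rule 1: maximise links back to already-enqueued nodes;
        # rule 2 as tie-breaker: prefer high total degree.
        nxt = max(remaining,
                  key=lambda n: (len(adjacency[n] & set(order)),
                                 len(adjacency[n])))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Toy graph: "B" is the hub, so it should come first.
graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}
print(bake_order(graph))  # "B" first; exact order among ties may vary
```

Note this is the plain version that rescans all remaining nodes each step, i.e. the N(N+1)/2 testing mentioned in the footnote below.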
If I used a bit more exploration, I could add a further (or even initial/intermediate) requirement at some point in the slicing-down of candidates for the next addition, such as "nodes that have neighbours that link back to the currently-established queue of nodes", or more. But then I'd be trading tree-branching in the future (the 'running through the queue that we establish') for tree-branching in the very establishing of said queue. As it stands, I'm checking N items, choosing one, checking N-1 items as possible candidates for onward travel, choosing one of those, etc.[1] Testing N+(N-1), then (N-1)+(N-2), etc., for a first-order look-ahead (and I could go for second-order, third, even Nth, at least for the first decision, weighting "best short-term answers to be found in a long-term search" to establish which supposedly immediate decision is the absolute best) starts to look more and more like doing the full tree-checking in a tree's worth of different ways, just to make sure I have the absolute best tree-checking strategy when it comes to the value-iteration stage! That way madness lies. Doable, but mad.
And if I check the script I just ran, again, it should by now have... Ah, sorry, I think my last tweak broke something slightly... Typical.
Ah, I see: a "." instead of a ","... That's easily corrected, at least. Not your concern, of course.
[1] I could reduce this a bit, as I'm not requiring my queue to form a 'single traversible path': after the first item, and the second item to which it must be linked, the actual 'best' third item might link to the first but not the second. So I'd need to maintain "all nodes that are neighbours of enqueued nodes" as a subset of N that gains new 'neighbours' (while old neighbours that get enqueued are removed), rather than just looking for immediate neighbours of the last node I chose. At the moment, the last-node-only version would work (losing some of the advantages I'm looking for), but there are reasons why it might not, and the balance of effort to track the 'hyperneighbourisation' might even make later setups fail, so I'm accepting the marginal hit of testing N(N+1)/2 things from start to finish during the initial enqueuing phase.
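For what it's worth, the frontier-maintenance alternative the footnote describes (and declines) might look like this; it's a hedged sketch under my own naming, and the `if frontier else remaining` fallback for disconnected graphs is my addition, not something the footnote specifies:

```python
# Alternative to rescanning all N nodes each step: keep a 'frontier' set of
# un-enqueued neighbours of the queue, extending it as each node is enqueued
# (old neighbours are removed as they themselves get enqueued).

def bake_order_frontier(adjacency):
    """adjacency: dict mapping node -> set of neighbouring nodes."""
    remaining = set(adjacency)
    first = max(remaining, key=lambda n: len(adjacency[n]))
    order = [first]
    remaining.remove(first)
    frontier = adjacency[first] & remaining
    while remaining:
        # Fall back to the full remaining set if the frontier empties
        # (e.g. a disconnected graph) -- my assumption, not the original's.
        candidates = frontier if frontier else remaining
        nxt = max(candidates,
                  key=lambda n: (len(adjacency[n] & set(order)),
                                 len(adjacency[n])))
        order.append(nxt)
        remaining.remove(nxt)
        frontier.discard(nxt)
        frontier |= adjacency[nxt] & remaining  # new neighbours join the frontier
    return order

graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}
print(bake_order_frontier(graph))  # hub "B" first; tie order may vary
```

The bookkeeping (`discard` plus the union on every step) is the "balance of effort" trade-off the footnote weighs against the simpler full rescan.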