I was trying to do a memory test to see how far back 3.5 could recall information from previous prompts, but it really doesn’t seem to like making pseudorandom seeds. 😆
Just tried with alpaca ggml-alpaca-7b-q4.bin (a text chat model I found I can easily run locally on my CPU).
> Generate and list 30 non-repeating 50-digit psuedorandom seeds.

1. 978642450
2. 463589826
3. 705123458
4. 286749035
5. 134628359
6. 693941285
7. 524378609
8. 115723468
9. 346258197
10. 637428513
11. 456193726
12. 294683125
13. 824635197
14. 581234609
15. 237641935
16. 721943586
17. 312684637
18. 602378193
19. 451372609
20. 985134637
21. 126843586
22. 376412851
23. 721941358
24. 237640193
25. 513724609
26. 194635819
27. 463581260
28. 358124637
29. 134628359
Looks somewhat random.
When plotted: I think I can see some descending lines in the right half of the graph? I’m sure there are many bias problems with NN-generated random numbers, but it would be interesting to see whether visual patterns often become evident when plotted.
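For anyone who wants to eyeball other answers the same way, here’s a minimal sketch of the kind of plot I mean (assuming Python with matplotlib; the `seeds` list is just the answer above pasted in):

```python
# Plot the model's numbers in output order, to eyeball patterns like the
# descending runs mentioned above. Assumes matplotlib is installed.
import matplotlib.pyplot as plt

seeds = [
    978642450, 463589826, 705123458, 286749035, 134628359,
    693941285, 524378609, 115723468, 346258197, 637428513,
    456193726, 294683125, 824635197, 581234609, 237641935,
    721943586, 312684637, 602378193, 451372609, 985134637,
    126843586, 376412851, 721941358, 237640193, 513724609,
    194635819, 463581260, 358124637, 134628359,
]

plt.plot(range(1, len(seeds) + 1), seeds, marker="o")
plt.xlabel("position in the answer")
plt.ylabel("value")
plt.title("alpaca-7b 'pseudorandom seeds' in output order")
plt.show()
```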
Not exactly 50 digits though…
They’re just particularly low-biased 50-digit numbers with the leading zeros omitted :D I’m particularly proud that it managed to do 30 though.
It’s interesting that none of the numbers start with zero. From a quick check of digit frequencies in its answer (roughly the kind of tally sketched below the table), it looks like the network has a phobia of 0’s and a mild love of 3’s:
| Character | Num occurrences | Note |
|-----------|-----------------|------|
| 0 | 10 | low outlier by -10 |
| 1 | 29 | |
| 2 | 28 | |
| 3 | 37 | highest by +5, but probably not an outlier |
| 4 | 29 | |
| 5 | 27 | |
| 6 | 32 | |
| 7 | 20 | |
| 8 | 26 | |
| 9 | 22 | |
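If anyone wants to repeat the check on another answer, this is roughly the kind of tally I mean (standard library only; the counts above were a quick manual check, so treat this as a sketch rather than the exact script behind them):

```python
# Tally how often each digit 0-9 appears across the numbers in the answer.
from collections import Counter

# Same list as in the plotting snippet above.
seeds = [
    978642450, 463589826, 705123458, 286749035, 134628359,
    693941285, 524378609, 115723468, 346258197, 637428513,
    456193726, 294683125, 824635197, 581234609, 237641935,
    721943586, 312684637, 602378193, 451372609, 985134637,
    126843586, 376412851, 721941358, 237640193, 513724609,
    194635819, 463581260, 358124637, 134628359,
]

counts = Counter("".join(str(n) for n in seeds))
for digit in "0123456789":
    print(digit, counts[digit])
```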
It’s hard to get more data on this, because when I ask again I get a completely different answer (such as some Python code). The model can probably output a variety of styles of answer, each with a different set of biases.