…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before, but gave up because it didn’t work well. I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved some 3d math. The solutions it generated were almost right every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs or bring back old ones.

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would “fix” the bug and provide a confident explanation of what was wrong… except it was clearly bullshit because it didn’t work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

  • zbyte64@awful.systems

    In my experience there are three ways to be successful with this tool:

    • write something that already exists so it doesn’t need to think
    • do all the thinking for it upfront (hello waterfall development)
    • work in very small iterations that don’t require any leaps of logic. Don’t reprompt when it gets something wrong; instead, reshape the code so it can only get it right

    The issue with debugging is that it doesn’t actually think. LLMs pattern match to a chain of thought based on signals, not reasoning. For it to debug, you need good signals in your code that explicitly say what the code is doing, and LLMs do not write code with that level of observability by default (see the sketch below).
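
    By signals I mean something like this (a made-up 3d example in Python, not code from anyone’s actual project): every step logs what it computed and asserts its own preconditions, so when something is off you can point the model at a specific message instead of “it’s still wrong”.

    ```python
    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger(__name__)

    def normalize(v):
        """Scale a 3d vector to unit length, logging intermediate values so a
        wrong result can be traced to a specific step instead of guessed at."""
        x, y, z = v
        length = (x * x + y * y + z * z) ** 0.5
        log.debug("normalize: input=%s length=%.6f", v, length)
        assert length > 0.0, f"cannot normalize zero-length vector {v!r}"
        result = (x / length, y / length, z / length)
        log.debug("normalize: output=%s", result)
        return result
    ```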

    Edit: one workflow I’ve had success with is as follows:

    • write a gherkin feature file describing the desired functionality; maybe have the LLM create multiple scenarios after I’ve defined one to copy from
    • tell the LLM to write tests using those feature files; it does an okay job, but needs help making the tests run in parallel (see the first sketch after this list)
    • if the feature is simple, ask the LLM to make a plan and review it
    • if the feature is complex, stub out the implementation in code and add TODOs, then direct the LLM to plan (see the second sketch below). Giving explicit goals in the code itself reduces token consumption and yields better plans
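
    To make the first two steps concrete (the feature, file names, and step text below are invented for illustration, and I’m assuming behave as the test runner):

    ```python
    # Hypothetical feature file (features/normalize.feature):
    #
    #   Feature: Vector normalization
    #     Scenario: Normalizing a non-zero vector
    #       Given the vector (3, 0, 4)
    #       When I normalize it
    #       Then the result has length 1
    #
    # Matching step definitions the LLM would be asked to write
    # (features/steps/normalize_steps.py):
    from math import isclose

    from behave import given, when, then

    @given("the vector ({x:g}, {y:g}, {z:g})")
    def given_vector(context, x, y, z):
        context.vector = (x, y, z)

    @when("I normalize it")
    def normalize_vector(context):
        x, y, z = context.vector
        length = (x * x + y * y + z * z) ** 0.5
        context.result = (x / length, y / length, z / length)

    @then("the result has length 1")
    def check_unit_length(context):
        x, y, z = context.result
        assert isclose((x * x + y * y + z * z) ** 0.5, 1.0)
    ```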
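
    And for the complex case, the kind of stub I hand it before asking for a plan (the function and TODOs are illustrative, not from a real project):

    ```python
    def slerp(a, b, t):
        """Spherically interpolate between unit quaternions a and b by factor t.

        TODO: normalize both inputs and reject zero-length quaternions.
        TODO: if the dot product is negative, negate one quaternion so the
              interpolation follows the shorter arc.
        TODO: fall back to linear interpolation when the angle is near zero,
              to avoid dividing by sin(theta) ~= 0.
        """
        raise NotImplementedError
    ```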