• marius@feddit.org
    1 month ago

    The lottery ticket hypothesis crystallised: large networks succeed not by learning complex solutions, but by providing more opportunities to find simple ones.

    Wouldn’t training a lot of small networks work as well then?

    • mindbleach@sh.itjust.works
      29 days ago

      Quite possibly, yes. But how much is “a lot”? A single wide network already acts like many permutations of a small one.

      Probing the space with many small networks and brief training sounds faster, but large networks already recreate that through pruning: they’ll train for a bit, mark any weights near zero, reset the rest to their initial values, and zero those out.
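
      That train-mark-reset-zero loop can be sketched in a few lines. This is a toy illustration, not anyone’s actual training code: `train_step` is a hypothetical stand-in for real gradient updates, and the 20% prune fraction per round is an arbitrary choice; only the pruning logic (train briefly, drop the smallest surviving weights, rewind survivors to initialization) mirrors what’s described above.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def train_step(w, mask):
          # Hypothetical update: nudge surviving weights a little.
          # A real trainer would apply gradients here.
          return (w + 0.01 * rng.standard_normal(w.shape)) * mask

      def prune_round(w_init, mask, prune_frac=0.2, steps=5):
          """One round: train briefly, zero out the smallest surviving
          weights, then rewind the survivors to their initial values."""
          w = w_init * mask
          for _ in range(steps):
              w = train_step(w, mask)
          # Rank surviving weights by magnitude; drop the smallest fraction.
          alive = np.flatnonzero(mask)
          k = int(len(alive) * prune_frac)
          drop = alive[np.argsort(np.abs(w.flat[alive]))[:k]]
          new_mask = mask.copy()
          new_mask.flat[drop] = 0.0
          # Reset: the surviving subnetwork goes back to initialization.
          return w_init * new_mask, new_mask

      w_init = rng.standard_normal((8, 8))
      w = w_init.copy()
      mask = np.ones_like(w_init)
      for _ in range(3):
          w, mask = prune_round(w_init, mask)

      print(f"density after 3 rounds: {mask.mean():.2f}")
      ```

      Each round keeps roughly 80% of the remaining weights, so density shrinks geometrically while the kept weights are always rewound to their original initialization rather than kept trained.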

      What training many small networks would be good for is experimentation. Super deep and narrow, just five big dumb layers, fewer steps with more heads, that kind of thing. Maybe get wild and ask a question besides “what’s the next symbol.”