LughM to Futurology@futurology.today · English · 6 months ago
Evidence is growing that LLMs will never be the route to AGI. They are consuming exponentially increasing energy, to deliver only linear improvements in performance. (arxiv.org)
67 comments
General_Effort · English · 6 months ago

> In theory there’s an inflection point at which models become sophisticated enough that they can self-sustain with generating training data to recursively improve

That sounds surprising. Do you have a source?