I thought you meant the AI scientist paper has some obvious (e.g. methodological or code) flaws or errors. I find that thread unconvincing, but we’ve been over this.
It doesn’t demonstrate automation of the entire workflow—you have to, for instance, tell it which topic to generate ideas about and seed it with example ideas—and also, the automated reviewer rejected the autogenerated papers. (Which, considering how sycophantic LLM reviewers tend to be, really reflects very negatively on paper quality, IMO.)