Let's see how the predictions made by your model hold up!
Name three examples, or even one to start, using a template like this:
1. A short and a long description of the phenomenon being analyzed.
2. A review of the existing literature on the topic (including the academic literature, of course).
3. A list of models that endeavor to explain it.
4. For each model: how it explains the phenomenon, what other known phenomena it covers, which of those agree with the model, which contradict it, and why.
5. A list of the predictions each model makes, including an analysis of potential uncertainties.
6. An analysis of potential experiments that could test the predictions and possibly differentiate between the models.
7. A review of the literature describing these or similar experiments done so far.
8. When applicable (and it nearly always should be), a suggested computer simulation of each potential experiment and its expected outcomes (software is generally cheaper than wetware); see the sketch below.
After that, the simulated data should be analyzed; the models, their predictions, and the suggested experiments adjusted; and the simulation repeated until it produces satisfactory results (what counts as satisfactory?). One might even discover that the proposed experiments have already been done in vivo and compare those results with the ones obtained in silico.
Only after all this preliminary work is done does it make sense to actually start doing live experiments.
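To make the simulation step concrete, here is a minimal sketch in Python (NumPy/SciPy). Everything in it is hypothetical and not taken from any particular study: the phenomenon (a priming effect on reaction times), the two competing models, and the effect sizes are placeholder assumptions. The point is only the shape of the workflow: generate data under each model, then check how often a realistically sized experiment would actually tell the models apart.

```python
# Hypothetical example: two models predict different priming effects on reaction times.
# All numbers (effect sizes, noise, sample sizes) are made-up placeholders.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_experiment(effect_ms: float, n_subjects: int, noise_sd: float = 80.0):
    """Simulate per-subject mean reaction-time differences (primed - unprimed)."""
    return rng.normal(loc=effect_ms, scale=noise_sd, size=n_subjects)

def discrimination_rate(effect_a: float, effect_b: float, n_subjects: int,
                        n_sims: int = 2000, alpha: float = 0.05) -> float:
    """Fraction of simulated experiments in which data generated under model A
    is distinguishable (two-sample t-test) from data generated under model B."""
    hits = 0
    for _ in range(n_sims):
        a = simulate_experiment(effect_a, n_subjects)
        b = simulate_experiment(effect_b, n_subjects)
        _, p = stats.ttest_ind(a, b)
        hits += p < alpha
    return hits / n_sims

# Model A predicts a 30 ms priming effect, model B predicts 60 ms (placeholders).
for n in (20, 50, 100, 200):
    rate = discrimination_rate(effect_a=30.0, effect_b=60.0, n_subjects=n)
    print(f"n={n:4d} subjects: experiment separates the models {rate:.0%} of the time")
```

If the separation rate stays low even at large sample sizes, that is a cheap signal, before any wetware is involved, that the experiment needs a redesign or that the models' predictions are too close to distinguish.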
In reality (= in academia), many of the steps above, especially the simulation, are skipped or short-circuited (experimental cognitive science is traditionally less precise than, say, physics), but there is no good reason they should be. As a bonus, any research done according to a template like this should have little trouble getting peer-reviewed and published.
Again, I would love to see at least one example.