What are our domains of expertise? A marketplace of insights and issues
We have recently obtained evidence that a number of people, some with quite interesting backgrounds and areas of expertise, find LessWrong an interesting read but find limited opportunities to contribute.
This post is an invitation to engage, in relative safety but just a little beyond saying “Hi, I’m a lurker”. Even that little is appreciated, to be sure, and it’s OK for anyone who feels the slightest bit intimidated to remain on the sidelines. However, I’m confident that most readers will find it quite easy to answer at least the first of the following questions:
What is your main domain of expertise? (Your profession, your area of study, or even a hobby!)
...and possibly these follow-ups:
What issues in your domain call most critically for sharp thinking?
What do you know that could be of interest to the LessWrong community?
What might you learn from experts in other domains that could be useful in yours?
Comments like the following, from the “Attention Lurkers” thread, suggest untapped resources:
I’m a Maternal-Fetal Medicine specialist. [...] I lurk because I feel that I’m too philosophically fuzzy for some of the discussions here. I do learn a great deal. Anytime anyone wants to discuss prenatal diagnosis and the ethical implications, let me know.
My own area of professional expertise is computer programming—perhaps one of the common sub-populations here. I’m also a parent, and have been a beneficiary of prenatal diagnosis (toxoplasmosis: turned out not to be a big deal, but it might have been). My curiosity is often engaged by what goes on “behind the scenes” of the professions I interact with as an ordinary citizen.
Yes, I would be quite interested in striking up a conversation about applying the tools discussed here to prenatal diagnosis; or in a conversation about which conceptual tools that I don’t know about yet turn out to be useful in dealing with the epistemic or ethical issues in prenatal diagnosis.
Metaphorically, the intent of this post is to provide a marketplace. We already have the “Where are we?” thread, which makes it easier for LessWrongers close to each other to meet up if they want to. (“Welcome to LessWrong” is the place to collect biographical information, but it specifically emphasizes the “rationalist” side of people, rather than their professional knowledge.)
In a similar spirit, please post a comment here offering (or requesting) domain-specific insights. My hunch is that even those of us in professions that don’t seem related to the topics covered here have more to contribute than we think; my hope is that this comment thread will be a valuable resource in the future.
A secondary intent of this post is to provide newcomers and lurkers with one more place where contributing can be expected to be safe from karma penalties—simply answer one of the questions that probably comes up most often when meeting strangers: “What do you do?”. :)
(P.S. If you’ve read this far and are disappointed with the absence of any jokes about “yet another fundamental question”, thank you for your attention, and please accept this apophasis as a consolation gift.)
I work as a structural engineer in the aerospace industry. My main task involves finding out how load is redistributed throughout structural elements (basically an issue of the relative stiffness of the different load paths) and then checking components against their various failure modes to determine how close they are to failing in the different conditions they’ll have to withstand. I also support related work, like determining accuracy of models based on flight data.
Rationality is related to my work in that determining how a model should be changed in light of flight test data (given the various sensors) is an elaborate application of Bayesian reasoning. Also, there is pressure to always “show the structure good”, presenting the danger of writing your conclusion before you begin reasoning.
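A minimal sketch of the kind of update involved, with every number invented for illustration: a conjugate normal-normal model for a single stiffness ratio, where flight-test readings pull the estimate away from the nominal model value while the prior keeps it from chasing sensor noise.

```python
import numpy as np

# Hypothetical illustration (all numbers invented): updating a belief about a joint's
# stiffness, expressed as a fraction of the nominal model value, from a handful of
# noisy flight-test readings, using a conjugate normal-normal model.

prior_mean, prior_sd = 1.00, 0.10   # prior: stiffness ratio ~ N(1.0, 0.1^2)
sensor_sd = 0.05                    # assumed measurement noise per reading

readings = np.array([0.93, 0.95, 0.91, 0.94])   # stiffness ratio implied by each test point

prior_prec = 1.0 / prior_sd**2
data_prec = len(readings) / sensor_sd**2
post_prec = prior_prec + data_prec
post_mean = (prior_mean * prior_prec + readings.sum() / sensor_sd**2) / post_prec
post_sd = post_prec ** -0.5

print(f"posterior stiffness ratio: {post_mean:.3f} +/- {post_sd:.3f}")
# The data pull the estimate below nominal, while the prior keeps it from chasing noise.
```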
As a side interest, I like to learn about AI and any topic that I believe will provide insight into it, such as biology/evopsych, thermodynamics, consciousness, and information theory (the latter of which I’ve been deemed an expert on by Steven Landsburg, somehow). Information theory in particular looks like a fruitful topic to study because of the unifying vision it gives to all the others; one of my projects has been to convert constraints of physical law into constraints on information, and I think it has enhanced my understanding of various topics.
My blind spot is chemistry, especially organic chemistry, which I think would be helpful to learn as a case study in how a system can maintain very specific order against the pressure of the second law; knowing the nuts and bolts of how that works at the molecular level would be very useful to me.
Not that I’m complaining, but why so many upvotes on this one? I don’t think I answered the questions very directly. Is it because this was informative, or …? I’m curious.
I’m a games programmer, specializing in real-time 3D graphics. My degree is in Experimental Psychology and I maintain an amateur interest in that but don’t have up to date domain expertise there.
Performance is a major concern in graphics programming (we have quite a lot of work to do in 33ms). Optimization in graphics requires not only a deep understanding of the algorithms, programming language, and hardware you are working with, but also the ability to step outside the immediate problem and see creative shortcuts or workarounds that achieve, by some alternative means, the high-level result your artists are asking for.
Familiarity with the latest research in the offline rendering world is valuable but just as important is the ability to have an understanding of what attributes of a scene are important to human perception and a willingness to throw mathematical or physical correctness out the window to get the visual results you are after.
I think working a problem like 3D rendering with hard time constraints makes me more aware than average of the importance of tradeoffs between accuracy and timeliness. A brilliant technique that doesn’t run in 1/30s on your target platform isn’t nearly as useful as a creative hack that does. I tend to think that the value of a fast approximation over a slow exact result is sometimes underappreciated in discussions of AI and human rationality.
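A toy illustration of that tradeoff, in Python rather than shader code and with the specifics invented for the sketch: a well-known graphics shortcut is to approximate a gamma-2.2 curve with a plain square, trading a small and often imperceptible error for a much cheaper operation.

```python
# Toy illustration (not production code): approximate a gamma-2.2 curve with a plain
# square -- one multiply instead of a pow() -- and check how much accuracy is given up
# across the 8-bit input range.

def gamma_exact(x):
    return x ** 2.2

def gamma_fast(x):
    return x * x

worst = max(abs(gamma_exact(i / 255) - gamma_fast(i / 255)) for i in range(256))
print(f"worst-case error over 8-bit inputs: {worst:.3f}")   # a few percent of full scale
```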
In terms of specific technical knowledge, the games industry has been ahead of most of the mainstream software industry in coming to grips with massively parallel hardware in the form of GPUs and novel parallel processors like the Cell in the PS3. This is potentially relevant to areas of interest to Less Wrong that require large amounts of computing power. I won’t suggest AI specifically as the tendency to embrace shortcuts in graphics programming is perhaps not something that would be encouraged when creating an AI...
There are a lot of applications of compression in graphics. PCA was originally developed for statistical analysis and I believe has had some application in machine learning but is known in graphics as a compression technique. I suspect the connections between compression and learning are probably rather deep and profound.
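A minimal sketch of PCA used as lossy compression, on invented data: keep only the top few principal components and reconstruct the original values from them.

```python
import numpy as np

# A minimal sketch of PCA as lossy compression, on invented data: keep only the
# top-k principal components and reconstruct an approximation from them.
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 30)) @ rng.normal(size=(30, 30)) * 0.1   # correlated columns

mean = data.mean(axis=0)
centered = data - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)   # rows of Vt are principal directions

k = 8
coeffs = centered @ Vt[:k].T            # compressed representation: 1000 x 8 instead of 1000 x 30
reconstructed = coeffs @ Vt[:k] + mean  # decompressed approximation

err = np.linalg.norm(data - reconstructed) / np.linalg.norm(data)
print(f"relative reconstruction error keeping {k}/30 components: {err:.3f}")
```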
Various somewhat esoteric bits of math have found their way into computer graphics. Quaternions are widely used as an efficient rotation representation and there is a fair amount of interest in geometric algebra. I imagine there are other useful areas of math that we could benefit from but that haven’t come to the attention of the field.
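For anyone who hasn’t met them, a small sketch of quaternions as a rotation representation (plain numpy, not optimized): four numbers instead of a 3x3 matrix, and rotating a vector v is q·v·conj(q).

```python
import numpy as np

# Sketch of quaternion rotation: quaternions stored as (w, x, y, z).

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q via q * v * conj(q)."""
    p = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, p), q_conj)[1:]

angle = np.pi / 2
q = np.array([np.cos(angle / 2), 0.0, 0.0, np.sin(angle / 2)])   # 90 degrees about z
print(rotate(q, np.array([1.0, 0.0, 0.0])))                      # ~[0, 1, 0]
```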
We ruthlessly exploit any and every ‘flaw’ in human perception to optimize 3D graphics. If there’s something that people tend not to notice we look for ways to avoid calculating it so we can use those cycles somewhere else. My Experimental Psychology background has been helpful in this regard.
Hi Matt,
I’m not a game programmer but always found it a fascinating subject. What language do you use for it? C++ I suppose?
Generally the majority of low level graphics code is written in C++, sometimes with smatterings of assembly to take advantage of SIMD hardware (or more likely nowadays compiler intrinsics which allow access to specialized processor features through C++ function calls and types). Increasingly I spend a lot of my time working with various shader languages that target GPU hardware.
Many games nowadays also use a higher level scripting language for gameplay (rather than engine) code. As a graphics programmer I usually spend less time working at that level than a gameplay programmer would, but I usually end up dealing with some UnrealScript, Lua, or whatever the current project is using for scripting.
Although my focus is on the runtime side I also work on graphics related tools and pipelines on occasion (level editors, model viewers, data compression, data build pipelines, etc.). The technologies used vary but I’ve done a fair bit of C# and Python coding plus bits of things like MaxScript and JavaScript (for scripting Photoshop).
“What is your main domain of expertise?”
I’ll be starting grad school in math in the fall. I’m interested in the areas of applied math that deal with getting information from data—when it’s noisy, when it’s high-dimensional, when it’s uncertain. Way down the line, this is related to machine learning and image processing.
“What issues in your domain call most critically for sharp thinking?”
I think—and I’m getting this impression not only from my own opinion but from professors I admire—that we need to think more about philosophy of science and ask ourselves what, exactly, we’re doing.
For example, consider an interdisciplinary problem about interpreting some biological data with computational techniques. A computer scientist will tend to look for solutions that treat the data as arbitrary strings, and “throw away” any physical significance, which means that he can only prove modest claims. A biologist will tend to use techniques that assume a lot of a priori knowledge about the specific experiment, and thus aren’t generally applicable. Both extremes lack a nice quality of explanatory efficiency.
Especially when we’re looking at data, and models to explain data, we’re going to need to face the question “What makes an explanation good?” And my personal opinion is that philosophers of the LW variety can help.
“What do you know that could be of interest to the LessWrong community?”
Some mathematical concepts that I know are good metaphors or explanations for how we develop knowledge from data. Some are already familiar here (Bayesianism), and some possibly less so (PCA; diffusion geometry; entropy, in the information-theoretic sense; K-means clustering; random matrix theory.)
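Two of those are compact enough to sketch in a few lines (invented data, purely illustrative): Shannon entropy of an empirical distribution, and Lloyd’s algorithm for K-means clustering.

```python
import numpy as np

# Invented data, purely illustrative: Shannon entropy and a plain K-means.

def entropy(counts):
    """Shannon entropy in bits of the distribution given by a vector of counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's algorithm: alternate nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

print(entropy([50, 25, 25]))                                    # 1.5 bits
X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5])
labels, centers = kmeans(X, 2)
print(centers)                                                   # roughly [0, 0] and [5, 5]
```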
“What might you learn from other experts that might be useful in yours?”
Oh, god, where to begin. People who actually know how computers see/define/categorize would be invaluable (I’m only recently learning that I’m interested in questions posed in CS or EE, but I don’t have much of a prior programming background.) Philosophers know what questions to ask. Statisticians know a lot about updating knowledge. Physics is where all the examples come from, and I don’t know any physics. People in any line of work who have insights about how their own minds work can come up with ideas on how to make an algorithmic “mind” work.
Thank you for the invitation. I have lurked for some time but have recently written a few comments and intend to continue.
Expertise: I have worked in medical labs, research (genetic, biochemical, physiological, chemical) labs, computer support and analysis, and lab management. I am now retired and on a pension. My hobby is neurobiology theorizing. I have been interested in this all my adult life because I am/was dyslexic.
Critical thinking domains in neurobiology: biological understanding of consciousness, memory, morals, communication.
What do I know: I can contribute a fair amount about how we think as opposed to how we think we think and I can bring a biological perspective to a blog that is heavy on the computer science, physics, economics and math areas. (I am also one of those rare females that do not react negatively to nerds.)
What can I learn: I have never read something by Eliezer and not felt I had learned something.
What’s your theory on dyslexia?
I believe there is more than one type (or cause) of dyslexia. In my case I have difficulty with phonic sounds. I can hear words clearly and I can speak orally with no problem whatsoever. But I cannot identify individual phonemes without great work and practice.
There are two theories that would fit my type of dyslexia. One is that the fast auditory path is not available to me and I must rely on the slow path. As phonemes are of very short duration I do not hear them clearly, but only hear the combinations of several phonemes. The other theory is that there is a lack of connectivity in a particular part of the auditory system and its connection with visual symbols. It is the connection across the brain’s midline that is probably at fault.
Although the literature seems to treat these as two separate theories among many, I tend to think that in my case they are related. The way I overcame my inability to read and spell was: (1) a kind teacher took me through 6 years of previous readers and spellers in my 7th year of school—he used phonetics (which was not used in my first years) and much practice; (2) I spent time learning the words I needed for exams—about equal to the time I took to learn the subject; (3) I started learning the history of words as clues to remembering how they were spelt; (4) my husband taught me to read in the Russian alphabet to learn sound-visual pairs without hang-ups over English spelling; (5) I did cryptic crosswords; and (6) I wrote and wrote and read and read. It felt from the inside like I was forcing a path around a barrier.
I still have problems and other symptoms of the condition: not distinguishing between clockwise and counterclockwise, not hearing the subtle sounds of foreign languages and other English accents, occasionally a lag between knowing that something was said and hearing it, etc.
I should mention that I was never diagnosed, because I am old enough that it was not a named condition when I was growing up. I believe it is inherited, as I have relatives with similar problems who have been diagnosed, being younger than me. I am left-handed and female. I believe the stats show that it is rarer in females, but more common in lefties.
I hope this is the sort of information that you want. If you want something more scholarly, please visit my site http://janetsplace.charbonniers.org where there is information under the health pages on dyslexia.
If you show me a photo of a non-modular analog synthesizer manufactured between the years 1971-1995, there is a good chance that I can name it and tell you how many oscillators it has.
I’ve been meaning to post on this thread because my background is apparently somewhat unusual here and I am happy to try to pitch in on questions where I may have some knowledge and experience. I majored in medieval history in college, went to law school, and worked in corporate defense litigation for most of the past eight years. (I do see that one other lawyer and one recent law grad have posted in this thread.) Most recently I’ve worked as a Visiting Fellow at SIAI.
I’m also female, which is also apparently unusual here. I used to really enjoy arguing/debate, but at this point, I find that I generally prefer collaborative and cooperative discussion.
I followed the Amanda Knox and the Cameron Todd Willingham posts and comments with some interest, but haven’t yet contributed. I am also mulling over some possible open thread comments or top-level posts related to rationality and the law, but they’re still very much in the idea stages.
I am also a long-time science fiction fan, which is probably more common here, although it was not as common among the SIAI folk as I had expected.
I’m an occasional commenter, not a lurker, but I tend to stay on the fringes of conversations.
I’m an economist and policy analyst working for the New Zealand government. So I have expertise in the operation of (one specific) government, economics and the policy development process.
In policy work, bad thinking can cost millions or even billions of dollars, and destroy lives. The more important something is, the more important rationality becomes when thinking about that thing.
I had been toying with the idea of writing an article discussing how to think about evidence in policy (an environment where evidence is often rare and low quality) and why I think Bayesian Rationality is much more useful than Traditional Rationality in an evidence-poor environment.
I don’t really know, I read Less Wrong for entertainment as much as edification.
It seems to me that we have a surprisingly large community in NZ and AU. Do you have any ideas as to why that might be?
Also, I have long been interested in understanding the differences in social epistemology between what seem like unusually well governed developed nations like NZ and other developed nations. I’d really like to hear from you regarding such issues. You can reach me at michael.vassar@gmail.com
I’ve noticed there are a reasonable number of Kiwis and Aussies on the Internet generally, at least relative to our population, since our representative numbers would be 0.4% of the Internet population. Of course a lot of the difference there will be due to the fact that we’re an English-speaking country with common cultural roots with the US.
I’m not sure why you’d get a large number of Kiwis and Aussies here specifically though. We’re more secular than the US or UK (especially in New Zealand) and since I imagine there is a negative correlation between religiosity and the topics Less Wrong covers, that might be part of the reason. I also get the impression that we’re less deferential to authority than the rest of the Anglosphere, and I would also expect interest in Less Wrong to be positively correlated with contrarianism, at least along some axes of contrarianism.
I’ll send you an e-mail to talk about the other stuff.
I’ve heard from people who know academic philosophy that the trends in Australia and New Zealand are very different from the rest of the world.
The funny thing is, one night my (American) friends and I had an informal discussion about what country in the world was least likely to become a dictatorship. We decided it had to be New Zealand or maybe Australia. I’m not sure that we knew enough to be good judges, but it somehow made sense.
I had a similar opinion before meeting an Australian politician and getting a tour of the Queensland parliament.
I, for one, would be quite interested in reading about how you would apply rationality to public policy.
I may put together an outline for the next Open Thread / Discussion forum thing. Though it’s worth pointing out that I’m talking about applying rationality to the process of developing policy options. Which policies get implemented is a matter of politics, which, as Robin Hanson points out, is not about policy.
I’m a prosecutor. This requires… well, it’s helped greatly by knowledge of many different subjects, though primarily law.
Mostly, it’s just a general ability to absorb large amounts of information and put it together. Cross-examination requires very high-speed sharp thinking and some showtime skills.
I’m sure some would be interested in how criminal law is practiced, but that’s more than a post. If you’ve got specific questions, hey, I’m around.
This is longer.
Bayes’ Theorem applies to decayed DNA hits, so that’s useful. There’s a generally-accepted view that lawyers think differently, but I think there’s substantial cross-over from law-thinking to engineering-thinking, at least when it’s done right.
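A toy version of that kind of calculation, with every number invented: a “cold hit” from a DNA database trawl, where a degraded sample means fewer usable loci, hence a larger random match probability, and the prior odds do a lot of the work.

```python
# Toy illustration only (all numbers invented): Bayes' Theorem on a DNA "cold hit".

prior_odds = 1 / 100_000           # assumed prior odds of guilt for a database-trawl suspect
random_match_prob = 1 / 1_000_000  # assumed random match probability for the partial profile

likelihood_ratio = 1 / random_match_prob       # ignoring lab error, relatives, etc.
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior probability: {posterior_prob:.2f}")   # ~0.91, not "a million to one"
```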
In many cases, there are issues that are less technical and more generally psychological; I have only a little special expertise, but I can tell you that the guy who reported his car stolen who is missing his pants had something happen that is not difficult to deduce.
There are many areas where I’m expert enough for my job, but far from expert-expert. I’m quite familiar with collision reconstruction, and my physics background is sufficient to understand the math. I resuscitated some aged accounting knowledge (I wouldn’t call it expertise, quite) for a trial a couple years ago.
I know very little biology or medicine; I have to talk to people if there’s a technical issue and I don’t have enough knowledge to see when something’s going wrong.
I don’t have any serious problem attending autopsies, or viewing unpleasant photos of violent outcomes, but I find medical stuff really—to use a technical legal term—icky.
--JRM
What is your main domain of expertise?
In Germany there is a study course called industrial management, which is half engineering and half business administration. The intent is to train students to understand both and to coordinate business processes and technical processes. That would be my main domain of expertise. But I’m also a programmer and system administrator.
What issues in your domain call most critically for sharp thinking?
Business administration is not as simple as most managers seem to think. Instead, a business administrator needs to understand what is going on, what results any given measure will produce, and what the constraints of any given methodology are.
What do you know that could be of interest to the LessWrong community?
I know economics, I understand how people interact inside organizations, I know some engineering.
I’m a postdoc working in a statistics group within a systems biology institute. In my PhD research I applied Bayesian statistics to the analysis of proteomics data. In terms of expertise, I accord myself journeyman status in statistical model formulation and in the implementation of MCMC algorithms to sample from a posterior distribution.
Most people commenting seem to be involved in science and technology (myself included), with a few in business. Are there any artists or people doing something entirely different out there?
To answer the main question, I am an MD/PhD student in neurobiology.
I draw and write, but only as a hobby. Does that count?
FYI, http://htht.elcenia.com/archive/161.shtml through http://htht.elcenia.com/archive/165.shtml are missing the comic images.
I have just finished degrees in law and philosophy, which I guess counts for some amount of expertise. I’m now studying both at graduate level. For the past three years or so, I’ve also been tutoring “traditional rationality” courses in philosophy. Before that, I was a high-school math tutor for about 8 years. Overall, I’d rate my teaching skills as somewhat higher-level than the areas I’ve studied formally.
I’m a bit of a generalist—I’ve also studied Chinese, human biology, and mathematics.
Legal and philosophical reasoning often deal with vaguely defined folk-concepts such as causation. Fitting these into good reasoning often means finding precise criteria to look for in the observable world.
Lawyers are also prone to emphasise persuasion, without necessarily being rational about it. Being trained to see both sides of an issue means we risk a failure to conclude based on evidence. It’s easier to say “each side has arguments”.
I know that a few simple ideas in argumentation theory can be quite powerful in augmenting people’s rationality. The course I’ve taught most often focuses on evaluating arguments, by performing two tasks: firstly assessing the “logical strength” of inferences, and secondly assessing the “plausibility of premises”.
I also know that many students struggle to understand this simple rationalist tool. I suspect most who do understand it may not apply it outside the domain of passing-philosophy-exams.
I’d like to know more about mathematical descriptions of reasoning. Statistical and probabilistic reasoning can be important in assessing legal liability, and I’m far from expert at this. Also, Bayesians seem to identify problems with “statistical significance” based on arbitrary p-value thresholds. Understanding this seems important to developing a rational philosophy of science, both academically and for myself.
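One compact form of that Bayesian complaint, under the standard assumptions of a normal test statistic: the Bayes factor in favour of the null can be no smaller than exp(-z²/2) (the Edwards/Lindman/Savage minimum bound), so a result just under p = 0.05 is weaker evidence than the “1 in 20” framing suggests.

```python
import math

# Minimum Bayes factor bound for a normal z-statistic: BF(null) >= exp(-z**2 / 2).

z = 1.96                                    # two-sided p very close to 0.05
min_bf = math.exp(-z**2 / 2)
print(f"minimum Bayes factor for the null: {min_bf:.2f}")   # ~0.15
# Even against the best-supported alternative, the data favour it over the null by at
# most about 7 to 1 -- well short of the 19-to-1 that a 0.05 threshold might suggest.
```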
This sounds like an extremely useful skill set. I think that we desperately need a better understanding of how people typically learn, learn to apply, and fail to apply traditional rationality if we are ever to expand this community beyond the thousands into the tens of thousands of participants without greatly diluting quality. I would GREATLY appreciate help on this topic. If interested, please email me. michael.vassar@gmail.com
Application and system administration of ERP software and computer operating systems.
Decision making and business process formulation.
Most problems are not technical in nature; they are problems with the people and the processes. This could mean you need to change the culture, you need to train people, you need to route things differently, or gain a better understanding of the workflow. Too many people say, “Let’s implement a piece of software and that will make everything better.” No, it won’t. Stop listening to the marketers and sales people and start analyzing your processes.
Hear, hear. :)
Excellent idea for a thread.
I’m a moderately frequent commenter and very occasional poster here, reading since the beginning.
In the 1970s I studied pure mathematics as an undergraduate and theoretical computer science as a postgrad, and then worked in the latter field for some years. However, I now regard all the work I did there as a waste of time, despite the citation counts, and since around 1996 began to move into more practical areas.
I got interested in control theory and the particular subfield that has been the subject of most of my top-level posts here (intro), and have developed, in physical simulation, a walking robot based on those principles.
I’ve also developed software for script-driven real time procedural animation of human upper body movement as part of a project for animating deaf sign language. I think it works pretty well, and I’d love to extend it to more general animation for the obvious application area, video games—with all respect to the deaf, animated sign language is very much a niche application. I haven’t had time to seriously work on this, since what actually pays my salary at present is developing finite element methods and software for simulating biological organ growth.
I also wrote the article on artificial languages for the 3rd edition of the Routledge Linguistics Encyclopedia, and was the Klingon consultant for series 7 of “Big Brother”. ghImlu’meH QaQ jaj vaghdich!
I’m currently studying stochastic calculus with a view to figuring out just what, if anything, classical control theory and Bayesian reasoning have to do with each other.
If this sounds like a resume touting for work, it’s intended to. :-)
What a nice thread idea. I’m a first year grad student in “engineering physics”. My research focus is plasma physics and magnetic confinement fusion, but right now I’m still mainly trudging through classes. My main expertise is in classical physics, mainly fluid mechanics, electrodynamics, and applied math.
In problem solving in our domain there is a need for sharp thinking, but I tend to think that this is actually just an illusion, and the really hard part comes outside of the actual calculations. In doing a calculation you have to kind of figure out what’s going on, but usually if you know all the assumptions and the correct methods, this is just a matter of working out the details. The hard part is understanding what assumptions are valid, and knowing what to do when you get stuck.
General meta-level thoughts on math and science, mainly. These are philosophical and practical, in the sense of “Why are math and physics so connected?”, or “Why is someone good at math?” or “What is the essential difficulty in doing this physics?” These are all questions I’ve thought about quite a bit in the course of my education.
I think there is stuff to learn in the basic meta-level philosophical and logical techniques that come into play in the forum, for example, when people break down other arguments or address flaws in reasoning. I wish I had more practice in things like basic logic and argumentation (e.g., the different types of argument.) It’s very useful for physics and especially math, which relies on the fidelity of your reasoning process.
Ah, someone to ask about this. Next year I will be an undergraduate engineering physics major at UC Berkeley. How do you find the major to be in terms of interestingness of the material? I notice you are doing a graduate degree, would you see that as necessary or are there employment opportunities you passed up?
First of all, congrats on your admission.
I do plasma physics; other people in EP do nuclear engineering, mechanics, astronautics, and so on. But yes, I find it very interesting. There are some subjects, like nuclear physics, which sound neat but then you study them a little and find out they can be sort of boring. Anyway, it all depends on your instructor, course style, personality, and so on.
From my undergrad experience it seems most people who do engineering are basically set for employment right out of undergrad, even for something more unusual like EP. I’m doing grad school more to try and get into research.
I’m not a lurker, but a regular commenter (and a ‘top contributor’ for the site’s entire first year), but I’m not sure I’ve filled this in elsewhere...
Short list: computer ethics, web development, generalist philosopher
I’m not sure if I have anything I’d like to call ‘expertise’, but I’m about as well-versed as anyone on the computer ethics literature and I’ve a decent grasp of normative ethics and philosophical logic (though I’ve a growing distaste for the style of academic philosophy). When I’m not working on my dissertation, I’m doing web development as a day job, and side academic interests are the web, video games, new media, and transhumanism. In truth, I’m more of a generalist, so I (hopefully) can produce insights that insiders in a field would miss.
My (hopefully) forthcoming doctorate is in computer ethics; my dissertation is on how to build ethical robots. My undergraduate degree is in Philosophy, where I focused on ethics and computer ethics, and minored in Asian studies and computer science.
I’ve been called a “serious academic” and put in the category “leading futurists and technologists”, though I don’t know which I find more dubious. And I don’t think “one of the world’s foremost Yudkowsky scholars” would be much of a stretch.
That’s an odd question. I’m not sure if that’s equivalent to “What issues in your domain are most critical” or “What issues in your domain are most likely to be corrupted by standard human failures of rationality” or some combination of the two.
The big problem that brought about the field of computer ethics is: New technology creates situations that our existing institutions (and intuitions) aren’t built for, and so what should we do about it?
Being something of a generalist, I’m always hoping to be backed up by someone with more domain-specific knowledge.
I’m an assistant professor of mathematics at the University of Wyoming. I specialize in discrete geometry, especially geometry and combinatorics of polyhedra. Maelin already gave a good account of the way in which mathematics calls upon (or does not call upon) rationality.
I also have a pretty good amateur’s familiarity with contemporary philosophy in the analytic tradition and with the philosophy of mathematics.
Main area of expertise? I have completed a degree in Science with an extra Honours year, majoring in pure mathematics (especially abstract algebra) and with a hefty side serving of formal logic and computer programming as well. I arrived at mathematics after planning to study physics, leaving high school, finding physics to be terribly taught and full of things that, I now realise, I lacked the mental sophistication for at the time, and realising I found the maths more interesting.
Last year and this year have been dedicated first to earning money, now to travelling around Europe, and soon to earning money again. Next year I will begin studying teaching with the intent of becoming a high school maths teacher. I’ve spent a few years tutoring in mathematics and also worked part-time as a teacher’s aide for a semester last year.
What demands sharp thinking? Sharp thinking is pretty vital in pure mathematics but it’s probably not something that needs to be consciously practised so much, owing to the already rigid structure of mathematics. Whilst I’m confident that rationality is valuable in teaching, I’m not entirely sure I could verbalise exactly how. I have a talent for explaining things and I enjoy explaining mathematics, which is the main reason I’m going into teaching. I can’t think of any examples of ways that being a sharp rationalist will be -directly- helpful although some may exist.
I am a regular commenter, evidently, but I am also a mechanical engineering grad student with a focus on thermal management of electronic packaging. Most of what I do ranges from applied Newtonian physics to empirical modeling.
I am also a webcomics nerd—I enjoy the medium, and as a consequence have learned a few things both about fandom communities and about the business of webcomicking.
My main domain of expertise is quantum information theory, encompassing quantum physics, but focusing on what types of information processing this makes both possible, and efficient, for various values of efficient.
I’m also well-schooled in a variety of math, and have experience in programming, Unix systems administration, and asynchronous hardware design.
EDIT: My credentials: I have undergraduate degree in physics from Caltech, and I’m currently working on my PhD thesis (ABD) at UNM.
Any expertise in overcoming physical disabilities to communicate/interface with the world more effectively?
Not personally, but I know somebody who’s good at that kind of thing. PM me specifics and I’ll pass them along?
Thanks! I’ll PM you a letter with the specifics.
Well, not a complete lurker—I’ve posted a couple of thoughts, following the rabbinical dictum that “the bashful shall not learn”. I have a sizable if somewhat weird Jewish Orthodox background; I guess that is a domain of expertise by itself.
More recently I made a career switch from general programming, to reporting, to data mining kinds of work (this was a planned transition over 3 years). I’m still way behind on many mathy things, but I know enough to be useful to my employer. I learned pretty much everything myself, though I did go to a low-quality college while being a “non-degree rabbinical student” (a yeshiva bachur not aiming for smicha, in the lingo).
I don’t have any rational superpower yet, but search for truth and some fortuitous accidents brought me thus far, so who knows ;)
I’m an applied mathematician. I am finishing my PhD on natural language processing. I worked on many industry projects, some relevant keywords are data mining, text mining, network analysis, speech recognition, and optimization on spatial data.
Years ago I learned computational complexity theory. Occasionally I still teach it at my old university. After the PhD, my next plan is to finish my problem book on computational complexity. The computational complexity approach to problems influences everything I do.
I love my work, but frankly, I don’t come to LW to talk about data mining. I believe my hobbies and side projects are more relevant and interesting here. I have two hobby projects that should really be tackled by physicists. I hope I don’t deviate too much from the spirit of this thread if I answer the “What might you learn from experts in other domains that could be useful in yours?” by introducing my projects as questions to physicists:
Let us assume that you are an all-powerful optimization process, and your goal is to finish an extremely long computation (say, a search for a very large Hamiltonian cycle) in the shortest possible amount of time. You have millions of galaxies to turn into computronium. What is the optimal expansion speed of your computer, considering our current understanding of particle physics, general relativity, and thermodynamics?
Of all the possible Universes in Tegmark’s level IV Multiverse, most don’t even have a concept of Time. How can we decide whether or not a specific Universe is in fact a Space-Time Continuum?
I see no reason to go slow on the expansion except for the possibility of hostile opposition. If you do intend to expand, you have nothing to gain computationally by delaying.
You have to build time into your metaphysics from the beginning. If you restrict your studies to arithmetic, category theory, or any study of static timeless entities, you won’t get time back for free. In general relativity, to have a timelike direction, your metric must have a certain signature. So perhaps we can say that the “mathematical object”, 4-manifold with a metric whose signature is +++- everywhere, describes a universe with time. But the mathematical object itself does not intrinsically contain time.
This is a significant flaw in Tegmark’s scheme, as he describes it, as well as all belief systems of the form “reality is mathematics”: mathematics is not the full ontology of the world. Time might be the least disputable illustration of this. Time is something you can describe mathematically, but it is not itself mathematics in the way that numbers are.
Let’s consider a classic example of an ontology, Aristotle’s. I mention it not to endorse it but simply to present an example of what an ontology looks like. According to Aristotle’s ontology, everything that is belongs to one of ten classes of entity—substance, quantity, quality, relation, place, time, position, state, action, affection—and these entities connect to each other in specific ways, e.g. substances have qualities.
General ontological theory talks about these ten categories and their relationships. All lesser forms of knowledge are “regional ontologies”, they only talk about subtypes of being, or perhaps beings which are built up from elementary types in some way.
Now Pythagoreans supposedly believed that all is number. If we transpose that slogan into Aristotle’s categories, what is it saying? It’s saying that the category of quantity is the only real one and the only one we need to study. Obviously an Aristotelian would reject this view. Quantity is not only to be studied in itself, but in its relations to the other categories.
Tegmark, and all other mathematical neoplatonists, are doing the same thing as the Pythagoreans. Modern mathematics is part of the full ontology, but only part of it. Because we know how to reason rigorously and with clarity about mathematical objects, and because we can represent so much of reality mathematically, there is apparently a temptation to view mathematics as reality itself. But this requires you to ignore the representation relation—what exactly is going on there? It’s not as if anyone has a very convincing account of how things and their properties are fused into one. But to adopt mathematical neoplatonism guarantees that you will be unable to think straight about such issues. With respect to time, for example, you will inevitably end up skipping back and forth between rigorous discussion of the properties of semi-Riemannian metrics, and then vague or even bogus assertions about how the metric “is time” and so on. This vagueness is a symptom of a problem overlooked, namely, what is the larger ontology of which mathematics is just a subset, and how do the categories of mathematics relate to the more physical categories of reality, like time and substance?
There’s no reason why you can’t have a systematic multiverse theory based on a richer ontology, one with physical categories as well as mathematical. But you would have to figure out the outlines of that richer ontology first.
I am not sure. The laws of thermodynamics may interfere. Do you suggest that the optimal expansion speed I am looking for is equal to the speed of light? Do you know how to build a computer that expands with exactly the speed of light?
If you do, then I am very interested, because that makes a pet theory of mine work: I seriously believe that the expansion speed of (expanding) civilizations goes through a fast phase transition from 0 to c. I half-seriously believe that this is the proper explanation of the Fermi Paradox: we can’t observe other civilizations because they are approaching us at the speed of light. (And by the time we could finally observe them, they would already have turned us into computronium.)
It is quite possible that we are using the same terms in some very different sense. But if accidentally we are really talking about the same things, then I think you are wrong.
I believe that time is an emergent phenomenon, and that it emerges from the more basic notion of memory. Of all the many arrows of time physicists and philosophers like to talk about, the thermodynamic arrow of time is the only basic one, and it is, in turn, just some averaging of the many local arrows defined by information-retrieval processes. Luckily for us, in our Universe, these processes are typically in sync. That’s why we can talk about time the way we do.
I agree (NB: also computer scientist, not physicist) with the premise that civilizations probably expand at near-c, but there’s a problem with this. Since it seems that intelligent life like us could have arisen billions of years ago, if life is common and this is the explanation for the Fermi Paradox, we should be very surprised to observe ourselves existing so late.
You are right. The argument is not compatible with the possibility that life is very common, and this makes it much less interesting as an argument for life not being very rare. But it is not totally superfluous: we can observe the past of a 46 billion light year radius sphere of the expanding, 14 billion year old Universe. Let us now assume that 4 billion years since the Big Bang is somehow really-really necessary for a maximally expanding civilization to evolve. In this case, my whole Fermi Paradox argument is still compatible with hundreds of such civilizations in the future of some of the stars we can currently observe. (You can drop hundreds of 10 Gly spheres into a 46 Gly sphere before starting to be very surprised that the center is uncovered.)
But you are right. I think my Fermi Paradox argument is an exciting thought experiment, but it does not add too much actual value. (I believe it deserves a sensationalist report in New Scientist, at least. :) ) On the other hand, I am much more convinced about the expansion speed phase transition conjecture. And I am very convinced that my original question regarding optimally efficient computational processes is a valuable research subject.
You’re right, and I hadn’t really thought that through — I had thought that this argument ruled out alien intelligence much more strongly than it does. Thanks.
Glad I could help. :) You know, I am quite proud of this set of arguments, and when I registered on LW, it was because I had three concrete ideas for a top-level post, and one of those was this one. But since then, I became somewhat discouraged about it, because I observed that mentioning this idea in the comments didn’t really earn me karma. (So far, it did all in all about as much as my two extremely unimportant remarks here today.) I am now quite sure that if I actually wrote that top-level post, it would just sit there, unread. Do you think it is worth bothering with it? Do you have any advice how to reach my audience with it, here on LW? Thanks for any advice!
No, because if there is something like a Gaussian distribution of the emergence times of intelligent civilizations, we could just be one of the civilizations on the tail.
Exactly. The argument is that, since being on the tail of a Gaussian distribution is a priori unlikely, our age + no observation of past civilizations is anthropic evidence that life isn’t too common.
We have no idea what the Gaussian distribution looks like. We don’t necessarily have to be on the tail, just somewhere say one sigma away. No observation of civilizations just corresponds to us being younger than average and the other civilizations being far away. Or we could be older and the other civilizations just haven’t formed yet. But none of this can imply whether life is uncommon or common.
No. I have nothing more exotic to suggest than a spherical expansion of ultrarelativistic constructor fleets, building Matrioshka-brains that communicate electromagnetically. All I’m saying is, if you think you have an unbounded demand for computation, I see no computational reason to expand at anything less than the maximum speed.
How do two such civilizations react when they collide?
Is “memory” a mathematical concept? We are talking about Tegmark’s theory, right? Anyway, you go on to say
and the moment you talk about “processes”, you have implicitly reintroduced the concept of time.
So you’re doing several things wrong at once.
1) You talk about process as if that was a concept distinct from and more fundamental than the concept of time, when in fact it’s the other way around.
2) You hope to derive time from memory. I see two ways that can work out, neither satisfactory. Either you talk about memory processes and we are back to the previous problem of presupposing time; or you adopt an explicitly timeless physical ontology, like Julian Barbour, and say you’re accounting for the appearance of time or the illusion of time. Are you prepared to do that—to say simply that time is not real? I’ll still disagree with you, but your position will be a little more consistent.
3) Finally, this started out in Tegmark’s multiverse. But if we are sticking to purely mathematical concepts, there is neither a notion of memory nor of process in such an ontology. Tell me where time or memory is in the ZFC universe of sets, for example! The root of the problem again is the neglect of representation. We use these mathematical objects to represent processes, mental states, physical states and so forth, and then careless or unwary thinkers simply equivocate between the mathematics and the thing represented.
I agree. That’s why I was careful to ask the advice of physicists and not computer scientists. I am a computer scientist myself.
I don’t know. But these cyclic cellular automata were an influence when I was thinking about these ideas. http://www.permadi.com/java/cautom/index.html (Java applet)
Your critique is misdirected. If I, a time-based creature, write a long paragraph about a timeless theory, it is not surprising that I will accidentally use some time-based notion somewhere in the text. But this is not a problem with the theory; it is a problem with my text. You jumped on the word ‘process’, but if I write ‘pattern’ instead, then you will have much less to nitpick about.
A little more consistent than the position you put in my mouth after reading one paragraph? This is unfair and a bit rude. (Especially considering the thread we are still on. I came here for some feel-good karma and expert advice from physicists, and I was used as a straw man instead. :) Should we switch to the Open Thread, BTW?)
To answer the question: yes, I am all the way down route number 2. Barbour has it exactly right in my opinion, except for one rhetorical point: it is just marketing talk to interpret these ideas as “time is not real”. Time is very real, and an emergent notion. Living organisms are real, even if we can reduce biology to chemistry.
Please read my answer to ata. I’m not a platonist. I don’t do such an equivocation. I am a staunch formalist. I don’t BELIEVE in Tegmark’s Multiverse in the way you think I do. It is a tool for me to think more clearly about why OUR Universe is the way it is.
Continued here.
I think this argument has the same logic as the Doomsday Argument, and therefore is subject to the same counterarguments (see SIA and UDT). I’ll explain the analogy below:
In the DA, the fact that I have a low birth rank is explained by a future doom, which makes it more likely for me to observe a low birth rank by preventing people with high birth ranks from coming into existence.
In your argument, the fact that we are outside the lightcones of every alien civilization is explained by the idea that they expand at light speed and destroy those who would otherwise observe being in the lightcone of an alien civilization.
I am afraid the analogy is not clear enough for me to apply it, and explicitly reproduce the relevant version of the counterarguments you are implying. I would be thankful if you elaborated.
In the meanwhile, let me note that the Doomsday Argument floats in an intellectual vacuum, while my proposed 0-1 law for expansion speed could in principle be a proven theorem of economics, sociology, computer science, or some other field of science, instead of being the wild speculation that it currently is. My goal of understanding the physics of optimally efficient computational processes is motivated by exactly this: I wish to prove the 0-1 law from assumptions that are still very speculative and shaky, but at least more basic.
I see, your proposed argument isn’t directly analogous to the standard Doomsday Argument, but more like a (hypothetical) variant that gives a number of non-anthropic reasons for expecting doom in the near future, and also says “BTW, a near future doom would explain why we have low birth rank.”
I’m not sure that such anthropic explanations make sense, but if you’re not mainly depending on anthropic reasoning to make your case, then the counterarguments aren’t so important.
BTW, I agree it is likely that alien civilizations would expand at near the speed of light, but not necessarily to finish some computation as quickly as possible. (Once you’re immortal, it’s not clear why speed matters.) Another reason is that because the universe itself is expanding, the slower those civilizations expand, the less mass/energy they will eventually have access to.
I’m not remotely a physicist, but I have a few comments, which I will do my best to confine to the limits imposed by my knowledge of my lack of knowledge.
By “optimal expansion speed”, do you mean “maximum possible expansion speed given particle physics, general relativity and thermodynamics (according to our current understanding thereof)”, or do you see some reason that a slower expansion would be beneficial to the ultimate goal (or is that the question)?
(Meanwhile, I’ll just say that I’d first want to either prove P≠NP or find a polynomial-time algorithm for the Hamiltonian cycle problem. P=NP may be unlikely, but if I were an all-powerful optimization process, I’d probably want to get that out of the way before brute-forcing an NP-complete problem. Might save a few million galaxies that way. Then again, an all-powerful optimization process would very likely have a better idea than this puny human.)
I’m not sure if something without a timelike dimension would really qualify as a universe. It’s really just a matter of definition, but since every mathematical structure has the same ontological status in Tegmark IV, including, say, the set {1, 2, 3}, a useful definition of “universe” will have to be more narrow than “every element of the Level IV Multiverse” and more broad than “every structure that can result from the laws of this universe”.
I’m not sure if we’d be able to rigorously define what structures count as “universes” (and it’s not terribly important, being that our definition of the word doesn’t impinge on reality anyway), but intuitively, what properties are necessary for a structure to seem like a universe to you in the first place? I’d be pretty flexible with it, but I think I’d require it to have something timelike, some way for conditions to dynamically evolve over at least one dimension.
Avoiding heat death may be beneficial, for example. As I wrote to Mitchell Porter, to me the most interesting special case of the question is: if you want to build the fastest computer in the Universe, should it expand at the speed of light? I’m really not a physicist, so I don’t even know the answer to a very simple version of this question, one that any particle physicist should be able to answer: is it possible for some nontrivial information processing system to spread at exactly the speed of light? If not, what about an expansion speed converging to c?
You (and Mitchell Porter) are completely right. At this point, I don’t have a convincing answer to your obvious question. In the meantime, Tegmark level IV is a good enough answer for me. (Note to Mitchell: it would be very hard to find someone less platonist than me. And I find Tegmark’s focus on computability totally misdirected, so in this sense I am not an intuitionist either.)
I think we disagree here. Please see my answer to Mitchell about the emergence of time from the more basic concept of memory.
I’m not a physicist, but my understanding is that it is not possible for an information processing system to expand at or arbitrarily close to the speed of light; if we neglect the time taken for other activities such as mining and manufacturing, the most obvious limit is the speed to which you can accelerate colony ships (which have to be massive enough to not be fried by collision with interstellar hydrogen atoms). The studies I’ve seen suggest that a few percent of lightspeed is doable given moderate assumptions, a few tens of percent doable if we can get closer to ultimate physical limits, 90%+ doable under extreme assumptions, 99%+ not plausible and 99.9%+ flat-out impossible without some kind of unobtainium.
On the question of ontology, I’m a card-carrying neoplatonist, so you’ve probably heard my position from other people before :-)
Having read further down and under the context of the Fermi problem, I think the idea is that the general limitations (on the first question) are more due to engineering than due to particle physics, relativity, and so on. Allow me to explain.
Relativity sets a limit on information propagation at the speed of light. More specifically, in physics they talk about waves having a phase velocity (which can be arbitrarily large) and a group velocity. The group velocity refers to the information carrying content of the wave, and this speed is always strictly limited to less than c. Since things like light waves are the fastest means of communication, this sets your upper bound according to the current state of the art.
But in reality, if you were expanding outwards from a point in space, you would need more than just a light wave. If you were a person, you would need a ship. In a more imaginative case, even if they could decode your body into pure information content and then broadcast that signal at light speed across the universe, it would be good for nothing unless you had something that could reconstruct you on the other side. But I’m guessing you’d already thought about this.
In a ship, then: accelerating a macroscopic object like a spaceship to high speeds in a straight line isn’t that hard, theoretically. The hard part would be things like maneuvering and radiation shielding. It takes a lot of energy to turn your trajectory if you’re a massive object moving at a significant fraction of c. I haven’t calculated it, but that’s just my intuition.
If you did something like accelerating on straight line trajectories (or geodesics, or what have you) and decelerating in straight lines, going from point to point around obstacles, then you obviously accumulate more delay time in a different fashion. In any case, there’s these engineering difficulties that are practical issues to expansion.
Maybe you’d rather imagine cellular automata or some kind of machine construction proceeding outwards at light speed, rather than more conventional ideas. In this case, it would be something radically different from what currently exists, so one can only speculate. The hard part might be the following. If you are building yourself outwards at the speed c, then you are also colliding with whatever is in front of you at speed c, and this would most likely cause your total destruction.
This is of course assuming that the machine would require a conventional structure in the form of nanorobots, gelatin, or basically anything with nontrivial mass distributions.
You might instead try to compromise in the following way. Namely, you expand at some high speed, say 0.5 c, where you can still shoot (e.g.) space nukes out ahead of you at another 0.5 c, to attempt to vaporize or sufficiently disperse the obstacles before you hit them. And so on and so forth…
Sociology PhD student. Too much of a generalist for my own good. Compared to the people at this site, my comparative advantage is probably my understanding of public policy (especially education policy), economics, survey design and analysis, applied statistics, and of course sociology.
Cryptography and sexual politics.
I’ve amassed quite a bit of knowledge about conspiracy theories and geopolitics. Regarding 9/11 I think I have a fairly good understanding of what went on behind the scenes.