Not particularly urgent. An understanding of how to update priors (which you can get a good deal of with an intro to stats and probability class) doesn’t help dramatically with the real problem of having good priors and correctly evaluating evidence.
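(For concreteness: the mechanics you'd get from such a class really are just plugging numbers into Bayes' rule. Here's a toy sketch in Python with entirely made-up numbers; the hard part is choosing the prior and the likelihoods, not the arithmetic.)

```python
# Toy illustration (made-up numbers): the mechanical part of updating a prior.
prior = 0.10                 # hypothetical prior probability of the hypothesis
p_evidence_if_true = 0.80    # hypothetical likelihood of the evidence if it's true
p_evidence_if_false = 0.30   # hypothetical likelihood of the evidence if it's false

posterior = (p_evidence_if_true * prior) / (
    p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
)
print(f"posterior = {posterior:.3f}")  # ~0.229
```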
The big pizza delivery chains have expanded their menus over time, from pretty-much-pizza to sandwiches, desserts, wings, salads, etc.
So it seems efficiency is not the driver.
I think you have failed to consider the requirements of Kickstarter. You need to really think about your marketing, deliver as clear and slick a statement of your project as possible, and imitate as much as you can of successful similar kickstarter projects.
Your video is neither clear nor slick (verbal explanation with random camera angles is a poor way for most people to absorb information), and needs visual aids at least.
I watched a couple minutes of explanation, and then zoned out. Sorry.
I’m guessing from your low funds that you’ve also done little to evangelize your concept, or have been unsuccessful in doing so.
I also think Kickstarter is a high barrier for coding projects: since so much programming is done as volunteer open-source work, people wonder why they'd donate extra. And you offer no rewards for backers, a key element of Kickstarter's concept.
You also say in the first paragraph “the project isn’t going well”.
It’s not really arbitrage if you lose money.
I think the same essential abuses that exist with debt exist here, so time-limitation (say equity stops yielding returns after 15-20 years, and can be discharged by bankruptcy) is important.
I worry about abuses when the equity stake is high. If you’re a mentor, and your investment decides they don’t really want to prioritize income maximization, what will you do?
Would the way to optimize returns be to hire the people you've invested in yourself (or to arrange some convenient swap, if such direct employment is forbidden), perhaps resulting in a system that looks either nepotistic or like indentured servitude?
Most of my objections melt away with shorter term investments. Still, equity is a much more natural fit on the project/business level.
And taking on low-earning degrees isn’t particularly an information issue. It’s a people-make-poor-decisions-at-18 issue. Data about majors’ earnings are readily available.
Owning a car can have a large advantage over leasing if you are likely to keep the car a long time. An owned car can also become a backup car (one it's no big deal to have break down) if you get a newer one; leasing two cars at once is a big waste of money.
Owning versus renting a home is not clear cut at all. Renting a house is a huge money pit unless you move frequently, and renting an apartment vs. owning a house is often not comparable in lifestyle. Owning a house gives you the property itself plus flexibility, and can result in longer-term wealth building.
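To put rough numbers on the car point above (every figure here is made up; the point is only the shape of the comparison, not the exact values):

```python
# Back-of-the-envelope comparison with hypothetical numbers: owning tends to win
# only if you keep the car long enough for the purchase price to amortize.
lease_per_year = 4800        # hypothetical annual lease cost
purchase_price = 25000       # hypothetical purchase price
own_upkeep_per_year = 1500   # hypothetical annual maintenance and repairs
resale_value = 5000          # hypothetical resale value at the end

for years in (3, 6, 10, 15):
    lease_total = lease_per_year * years
    own_total = purchase_price + own_upkeep_per_year * years - resale_value
    better = "own" if own_total < lease_total else "lease"
    print(f"{years:2d} years: lease ${lease_total:,}, own ${own_total:,} -> {better}")
```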
I used to be a poor student, and while I had a few indulgences I was frugal by virtue of being unable to afford some things. Now I have a job that makes plenty of money, and I spend it on things I would have once considered a poor value or even outright wasteful (Uber instead of public transit, ordering something on Amazon that I'm unlikely to use).
I imagine if I had much much more money, I’d spend it on things I consider wasteful now.
So, Harry should probably have figured this out 60 chapters ago.
But we’ll cut him some slack.
You’ve dropped out of the lower end of Flow, the optimal level of challenge for a task.
You’ve solved the intellectually interesting nugget, or believe you have, and now all that’s left are the mundane and often frustrating details of implementation. Naturally you’ll lose some motivation.
So you have to embrace that mundanity, and/or start looking at the project differently.
That’s too strong. For instance, multi-person and high-noise environments will still have room for improvement. Unpopular languages will lag behind in development. I’d consider “solved” to mean that the speech-processing element of a Babelfish-like vocal translator would work seamlessly across many many languages and virtually all environments.
I’d say it will be just below the level of a trained stenographer with something like 80% probability, and “solved” (somewhat above that level in many different languages) with 30% probability.
With 98% probability it will be good enough that your phone won’t make you repeat yourself 3 times for a simple damn request for directions.
I think drones will probably serve as the driver of more advanced technologies—e.g. drones that can deposit and pick up payloads, ground-based remote-controlled robots with an integration of human and automatic motion control.
Right—I agree that Go computers will beat human champions.
In a sense you’re right that the techniques are general, but are they general techniques that happen to work specifically well for Go, if you get what I’m saying? That is, would they produce similar improvements when applied to Chess or other games? I don’t know, but it’s always something to ask.
I am in the NLP mindset. I don’t personally predict much progress on the front you described. Specifically, I think this is because industrial uses mesh well with the machine learning approach. You won’t ask an app “where could I sit?” because you can figure that out. You might ask it “what brand of chair is that?” though, at which point your app has to have some object recognition abilities.
So you mean agent in the sense that an autonomous taxi would be an agent, or an Ebay bidding robot? I think there’s more work in economics, algorithmic game theory and operations research on those sorts of problems than in anything I’ve studied a lot of. These fields are developing, but I don’t see them as being part of AI (since the agents are still quite dumb).
For the same reason, a program that figures out the heliocentric model mainly interests academics.
There is work on solvers that try to fit simple equations to data, but I’m not that familiar with it.
I’m not asking for sexy predictions; I’m explicitly looking for more grounded ones, stuff that wouldn’t win you much in a prediction market if you were right but which other people might not be informed about.
I think NLP, text mining and information extraction have essentially engulfed knowledge representation.
You can take large text corpora and extract facts (like Obama IS President of the US) using fairly simple parsing techniques (and soon, more complex ones), then put these in your database either in semi-raw form (e.g. subject-verb-object, instead of trying to transform the verb into a particular relation) or using a small variety of simple relations. In general it seems that simple representations (which could include non-interpretable ones like real-valued vectors) that accommodate complex data and high-powered inference are more powerful than trying to load more complexity into the data’s structure.
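A minimal sketch of what I mean by the semi-raw form, with hand-written triples standing in for whatever the parsing step would actually extract:

```python
# Minimal sketch: store facts as semi-raw (subject, verb, object) triples
# rather than mapping each verb onto a hand-designed relation.
# The triples here are written by hand; a real pipeline would extract them with a parser.
from collections import defaultdict

triples = [
    ("Obama", "is", "President of the US"),
    ("Obama", "was born in", "Hawaii"),
    ("Hawaii", "is", "a US state"),
]

# Index by subject so simple questions become lookups rather than logical inference.
by_subject = defaultdict(list)
for subj, verb, obj in triples:
    by_subject[subj].append((verb, obj))

def facts_about(entity):
    """Return every (verb, object) pair stored for an entity."""
    return by_subject.get(entity, [])

print(facts_about("Obama"))
# [('is', 'President of the US'), ('was born in', 'Hawaii')]
```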
Problems with logic-based approaches don’t have a clear solution, other than to replace logic with probabilistic inference. In the real world, logical quantifiers and set-subset relations are really really messy. For instance, a taxonomy of dogs is true and useful from a genetic perspective, but from a functional perspective a chihuahua may be more similar to a cat than to a St. Bernard. I think instead of solving that with a profusion of logical facts in a knowledge base, it might be solved by non-human-interpretable vector-based representations produced from, say, a million YouTube videos of chihuahuas and a billion words of text on chihuahuas.
Google’s Knowledge Graph is a good example of this in action.
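On the vector point above, a toy illustration: the vectors here are invented by hand, whereas real ones would be learned from the videos and text I mentioned, but the idea is that functional similarity falls out of distances rather than a taxonomy.

```python
# Toy illustration with invented vectors; dimensions might loosely stand for
# things like size, temperament, and "lap-animal-ness". Real representations
# would be learned, not written by hand.
import math

vectors = {
    "chihuahua":  [0.1, 0.9, 0.9],
    "cat":        [0.2, 0.8, 0.8],
    "st_bernard": [0.9, 0.3, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine(vectors["chihuahua"], vectors["cat"]))         # high: functionally similar
print(cosine(vectors["chihuahua"], vectors["st_bernard"]))  # lower, despite the shared taxonomy
```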
I know very little about planning and agents. Do you have any thoughts on them?
That’s what I know most about. I could go into much more depth on any of them.
I think Go, the board game, will likely fall to the machines. The driving engine of advances will shift somewhat from academia to industry.
Basic statistical techniques are advancing, but not nearly as fast as these more downstream applications, partly because they’re harder to put to work in industry. But in general we’ll have substantially faster algorithms to solve many probabilistic inference problems, much the same way that convex programming solvers will be faster. But really, model specification has already become the bottleneck for many problems.
I think at the tail end of 10 years we might start to see the integration of NLP-derived techniques into computer program analysis. Simple prototypes of this are on the bleeding edge in academia, so it’ll take a while. I don’t know exactly what it would look like, beyond better bug identification.
What more specific things would you like thoughts on?
After talking with a friend, I realized that the unambitious, conformist approach I’d embraced at work was really pretty poisonous. I’d become cynical, and realized I was silencing myself at times and not trying to be creative, but I really didn’t feel like doing otherwise.
My friend was much more ambitious, and had some success pushing through barriers that tried to keep her in her place, doing just the job in her job description. It wasn’t all that hard for her; I’d just gotten too lazy and cynical to do this myself after mild setbacks.
The bureaucratic element was a very good idea by the Gatekeeper.
How superhuman does an AI have to be to beat the Kafkaesque?
You may be right. Hackernews then. An avowed love of functional programming is a sure sign of an Auteur.
Yes! Like those.
I think you’re being a bit harsh though. The problem with personality tests and the like is not that the spectrums or clusters they point out don’t reflect any human behavior at all; it’s that they assign a label to a person forever and try to sell it with self-fulfilling predictions (“Flurble-type personalities are sometimes fastidious”, “OMG I AM sometimes fastidious! This test gets me”).
Professional/Auteur is a distinction slightly more specific than personality types, since it applies to how people work. It comes from the terminology of film, where directors range from hired hands who fill a specific void in production, to auteurs whose overriding priority is to produce the whole film as they envision it, whether this is convenient for the producer or not. Reading and listening to writers talk about their craft, it’s also clear that there’s a spectrum from those who embrace the commercial nature of the publishing industry and try hard to make that system work for them (by producing work in large volume, by consciously following trends, etc.) to those who care first and foremost about creating the artistic work they envisioned. In fact, meeting a deadline with something you’re not entirely satisfied with vs. inconveniencing others to hone your work to perfection is a good example of diverging behavior between the two types.
There are other things that informed my thinking like articles I’d read on entrepreneurs vs. executives, foxes vs. hedgehogs, etc.
If I wanted to make this more scientific, I would focus on that workplace behavior aspect and define specific metrics for how the individual prioritizes operational and organizational concerns vs. their own preferences and vision.
After looking at the website, I still find it a very complicated idea, and it’s totally unclear what a small-scale implementation is supposed to look like (i.e. what impact it would have in a small community of enthusiasts, what incentive anyone has to adopt it, etc.).
I can’t evaluate your math because I don’t see any math on your website or Kickstarter page. I see references to Bayes and information theory, but no actual math.