Open thread, May 8 - May 14, 2017
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options “Notify me of new top level comments on this article” and “
For the past few months we’ve had three developers (Eric Rogstad, Oliver Habryka, and Harmanas Chopra) working on LW 2.0. I haven’t talked about it publicly much because I didn’t want to make promises that I wasn’t confident could be kept. (Especially since each attempt to generate enthusiasm about a revitalization that isn’t followed by a revitalization erodes the ability to generate enthusiasm in the future.)
We’re far enough along that the end is in sight; we’re starting alpha testing, and I’m going to start posting a status update in the Open Thread each Monday to keep people informed of how it’s going.
New research out of the Stanford / Facebook AI labs: They train an LSTM-based system to construct logical programs that are then used to compose a modular system of CNNs that answers a given question about a scene.
This is very important for the following reasons:
As a breakthrough in AI performance, it beats previous benchmarks by a significant margin.
Capable of learning to generate new programs even if only trained on a small fraction (< 4%) of possible programs.
Their strongly-supervised variant achieves super-human performance on all tasks within the CLEVR dataset.
It is much less of a black box than typical deep-learning systems: the LSTM creates an interpretable program, which lets us understand how the system tries to answer the question.
It is capable of generalizing to questions made up by humans, not found in its training data.
This is really exciting and I’m glad we’re moving further in the direction of “neural networks being used to construct interpretable programs.”
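To make the “LSTM writes a program, modules execute it” idea concrete, here is a minimal sketch in Python. The module names, program format, and helper objects are made up for illustration; the paper’s actual program syntax and module set differ.

```python
# Hypothetical sketch of the "program generator + module executor" idea:
# a generator (e.g. an LSTM) emits a small program (a tree of module names),
# and each module is a small network applied to image features.
# Module names and program structure here are illustrative only.

def execute(program, image_features, modules):
    """Recursively run a program tree over image features.

    program: a tuple ("module_name", [child_programs...])
    modules: dict mapping module names to callables.
    """
    name, children = program
    child_outputs = [execute(c, image_features, modules) for c in children]
    return modules[name](image_features, *child_outputs)

# For the question "How many objects are left of the red sphere?",
# a generated program might look like:
program = ("count",
           [("relate_left",
             [("filter_red",
               [("filter_sphere", [("scene", [])])])])])
# answer = execute(program, image_features, modules)  # modules/features are placeholders
```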
You might find this interesting.
The Strange Loop in Deep Learning
https://medium.com/intuitionmachine/the-strange-loop-in-deep-learning-38aa7caf6d7d
“The crux of the approach is the use of a ‘cycle-consistency loss’. This loss ensures that the network can perform the forward translation followed by the reverse translation with minimal loss. That is, the network must learn not only how to translate the original image, it also needs to learn the inverse (or reverse) translation.”
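For readers who want the formula: one common way to write a cycle-consistency loss for translators G : X → Y and F : Y → X (a sketch; the exact norm and weighting vary by paper) is:

```latex
\mathcal{L}_{\mathrm{cyc}}(G, F)
  = \mathbb{E}_{x \sim X}\big[\lVert F(G(x)) - x \rVert_{1}\big]
  + \mathbb{E}_{y \sim Y}\big[\lVert G(F(y)) - y \rVert_{1}\big]
```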
I have a neat idea for a smartphone app, but I would like to know if something similar exists before trying to create it.
It would be used to measure various things in one’s life without having to fiddle with spreadsheets. You could create documents of different types, each type measuring something different. Data would be added via simple interfaces that fill in most of the necessary information. Reminders based on time, location and other factors could be set up to prompt for data entry. The gathered data would then be displayed using various graphs and could be exported.
The cool thing is that it would make reliably measuring most things on a phone much simpler than keeping a spreadsheet. For example: you want to measure how often you see a seagull. You’d create a frequency-measuring document, title it “Seagull sightings”, and each time you open it, there’d be a big button for you to press indicating that you just saw a seagull. Pressing the button would automatically record the time and date, and perhaps the location, of the sighting. Additional fields could be added, like the size of the seagull, which would be prompted for and logged with each press. With a spreadsheet, you’d have to enter the date yourself, and the interface isn’t nearly as convenient.
Another example: you’re curious as to how long you sleep and how you feel in the morning. You’d set up an interval-measuring document with a 1-10 integer field for sleep quality and reminders tied into your alarm app or the time you usually wake up. Each morning you’d enter hours slept and rate how good you feel. After a while you could look at pretty graphs and mine for correlations.
A third example: you can emulate the experience sampling method for yourself. You would have your phone remind you to take the survey at specific times in the day, whereupon you’d be presented with sliders, checkboxes, text fields and other fields of your choosing.
This could usefully be taken further by adding a crowdsourcing aspect. Document templates could be shared in a sort of template marketplace. The data of everyone using a certain template would accumulate in one place, making for a much larger sample size.
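For the curious, a minimal sketch of what the underlying data model might look like (class and field names here are invented for illustration, not an existing app’s API):

```python
# Hypothetical data model for the proposed tracking app.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict, List, Optional

@dataclass
class Entry:
    timestamp: datetime                      # filled in automatically on button press
    location: Optional[str] = None           # optional GPS fix
    extra_fields: Dict[str, Any] = field(default_factory=dict)  # e.g. {"seagull_size": "large"}

@dataclass
class Document:
    title: str                               # e.g. "Seagull sightings"
    kind: str                                # "frequency", "interval", "survey", ...
    entries: List[Entry] = field(default_factory=list)

    def record(self, **extra_fields):
        """One tap: store an entry with the current time plus any prompted fields."""
        self.entries.append(Entry(timestamp=datetime.now(), extra_fields=extra_fields))

# Usage:
seagulls = Document(title="Seagull sightings", kind="frequency")
seagulls.record(seagull_size="large")
```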
I used to be very active in the Quantified Self community, and I still follow the Facebook group. As far as I know there’s no app that does a good job at this task.
For background research you might check out: https://gyrosco.pe/ http://www.inputbox.co/#/start http://www.reporter-app.com/ http://brainaid.com/
Here’s another app https://play.google.com/store/apps/details?id=com.zagalaga.keeptrack
I think that app also suffers from there being too many clicks if you have a bunch of values to track.
I think it would be unusable for my morning tracking because I have to make 3 clicks for every yes/no checkbox.
It reminds me of the story that when Jeff Bezos told his engineers to do 1-click buying they came back with a solution that took 4 clicks.
That’s incorrect if I understood right. Here’s how I use it:
Click reminder notification (or tracker-specific shortcut) to open
Click yes or no (you can have multiple of these in a single tracker)
Click save
So it’s 1 click to begin, 1 click for each choice, 1 click to save. There’s also support for input fields and lists with predefined values.
If you know an app that does this better, I’m looking…
That’s three clicks.
If you have a Google form with a checklist of 10 items and you answer half of them with “yes” (tick them), you have 7 clicks (if we count opening the form and sending the form) instead of 30 (or maybe 25 if you play with predefined values).
The main argument I’m making here is that there’s no app out there that really solves this problem well and there’s room for Sandi to create something better than the present options.
I probably explained badly. You can have 10 yes/no questions and fill half with 7 clicks using this app. It works exactly like the example you gave.
Is this what the paid “Multi value” feature does?
Yes.
I recommend the app warmly, but at the same time I’d be happy to switch if something with better design or features came up. I haven’t found anything as good yet.
Is there somewhere I can see a good demo of what the multi-value feature looks like in practice?
Thanks! I didn’t know this was such a developed concept already and that there are so many people trying to measure stuff about themselves. Pretty cool. I’ll check out Quantified Self and what’s linked.
I just went through my app list and found https://play.google.com/store/apps/details?id=org.odk.collect.android&hl=en
The Quantified Self Facebook group is https://www.facebook.com/groups/quantifiedself/ . It might be another good place to get answers to your question.
Gleeo Time Tracker lets you define categories, and then use one click to start or stop tracking a category. You can edit the records and include more specific descriptions in them. You can export all data to a spreadsheet. I use it to track my daily time at a very general level: how much I sleep, meditate, etc.
(Note: When you start integrating with other apps, there are almost unlimited options. You may want to make some kind of plugin system, write a few plugins yourself, and let other users write their own. Otherwise people will bother you endlessly to integrate with their favorite app.)
Looks like a database with some input forms.
That’s a reduction that is valid for many, many applications.
A fair point :-)
Yeah, that’s why I kept comparing it to a spreadsheet. Ease of use is a big point. I don’t want to write SQL queries on my phone.
The point is, this kind of problem is the wheel that every starting coder feels the need to reinvent. How much innovation is there in linking an on-screen UI element like a button with a predefined SQL query? (Eh, don’t answer that, I’m sure there is a patent for it :-/)
Sure, you may want a personalized app that is set up just right for you, but in that case just pick the right framework and assemble your app out of Lego blocks. You don’t need to build your own plastic injection moulding machinery.
If you want to use spreadsheet software for your data management, you usually just use Excel instead of writing your own code for your spreadsheet needs.
There’s no comparable well-developed app that does data entry on smart phones very well.
There are Android databases, e.g. Memento. Doing data entry “well” depends on your needs.
Data entry is a different problem than just having a database.
I installed Memento; let me start by listing how it screws up:
① The multiple choice field starts by showing me an empty drop-down menu. I have to click on it to make it expand. After selecting my choices I have to click on “Ok” to get back to my form. That’s clearly two clicks too many.
② Automatic time tracking is even worse than Google Forms. The field that tracks the time gets shown to the user labeled “current time”. There’s no reason why it can’t do the time tracking in the background. And if I fill out 5 entries there’s no reason why it can’t give me 5 time stamps. I would also want those time stamps with milliseconds, and Memento apparently thinks that nobody needs milliseconds.
③ It doesn’t do notifications. For a use case like morning tracking it’s good to have a notification. That means I can press the power button, tap the notification, and I’m right at my form. There’s no need to unlock the phone to enter data.
④ Many of the buttons are just too small. Yes, I can click on small buttons, but it’s not as fast, and if I want to build the habit of tracking something for QS purposes, convenience matters a great deal.
So go complain to Memento’s creator how it doesn’t fit your use case :-P
I don’t think “data entry” is the problem that Memento is designed to solve well. From a QS standpoint a feature like notification seems key but from a “build a database”-standpoint it isn’t.
Thinking in terms of QS leads to getting as many timestamps as possible, but if you design a database then you don’t add columns that the user doesn’t explicitly say he wants to have.
Your post reads as if you read my mind. :)
I currently use a mix between TapLog (for Android) and google forms (with an icon on my home screen so that it mimics a locally installed app).
Neither feels as if they really solve my needs, though. E.g. both lack a reminder feature.
What does TapLog lack, besides a reminder feature? It seems pretty nifty from the few screenshots I just saw.
TapLog is very nifty, it’s simply that it would be even better with a somewhat extended feature set.
Here’s one use case: I want to log my skin picking and skin care routine (morning/evening).
The first is easy. I just add a button to my home screen that increments by one every time I click it (which is every time I touch my face with my fingers). After a while I can plot number of picks each day, or month, or cumulative, etc. It’s very nice.
Logging my skin care routine is more difficult, since TapLog does not support lists. (Only quantity, and/or text-input [with an optional prompt], and/or gps position, for a single entry)
What I would like is for TapLog to let me predefine a list of items (shave, cleanse, moisturizer) then give me a push notification in the morning and/or evening requesting me to check off each item.
(If you use something like Wunderlist with a daily repeat of the list, it is very fragile. If you miss a couple of days you have to reset the date for the reminder, because there’s no way for unfinished lists to simply disappear unless you actually check them off. And in Wunderlist there’s no way to analyze your list data to see how well you did last month, etc.)
TapLog is designed for entering one piece of data at a time.
If you have a checklist with 10 items and on average 5 are “yes”, you have to do 10 clicks. Basically “click item 1 yes”, “back”, “click item 2 yes”, “back”, “click item 3 yes”, “back”, “click item 4 yes”, “back”, “click item 5 yes”, “back”. If you have a Google form it only takes half as many clicks.
Besides pure click counting it’s also nice to see the checklist of 10 items together before clicking send to make sure that everything is right.
You could probably cobble something together with google forms.
https://docs.google.com/forms/u/0/
This can record the time and date when the form was filled in, and it can feed into a spreadsheet for easy analysis.
Google Forms is very nice but it’s not optimized for the smart-phone form factor.
It’s not even optimized for gathering as much data as possible. It doesn’t give me a time stamp for every single data entry, only one timestamp when the form is finished. It also needs internet. There was a while when I purposefully deactivated my router in the first hours of the day to reduce distractions, and that prevented me from doing my morning tracking with Google Forms.
Fair. I mentioned it as a way to validate how much use they would get out of the potential app and also as a way of finding the pain points.
I didn’t want to criticize that you mentioned it; I just wanted to give the general reasons why the solution has problems (which is important for someone wanting to build something better).
Not radical enough!
You don’t need to see a seagull; it is better to hear a seagull, especially because your field of vision doesn’t extend all around you or behind a tree.
Your hearing is.
And when you hear a seagull, your phone hears it too, and there is no need for “a popup insert record into the database Android/Apple gesture widget” shit. It can be done automatically every time!
You don’t want me to continue, do you?
I think this is the wrong way to look at this problem. You can easily build an app for detecting seagulls by their sound, but that app isn’t easily customizable and you can’t easily throw it at different problems.
There’s a reason why most of us still use paper from time to time. Multiple local rationalists I know use paper notebooks. That’s because paper is very flexible. For all its problems it still outperforms digital tools in many circumstances because it’s so adaptable.
Recognizing a seagull by its voice is a so-called ARA: an Absolutely Required Application.
We need a lot of them, billions, perhaps trillions. Or just one, flexible enough to substitute all of them.
Also known as the “better part of an AI”.
(Don’t bother, I have just invented this terminology. )
I’m not exactly sure how to respond, but maybe “don’t feed the trolls” is the way to go? I don’t see anything that Sandi did that motivated trolling her.
Some people have no imagination at all. In the “current year”, the idea is to feed a database on a smartphone by hand. Which would be kind of okay, if it weren’t:
Give me a break!
The answer is actually: There isn’t a good app for this purpose.
Writing complex AI isn’t the only unsolved problem. Writing a simple and functional general purpose data entry app for smartphones also happens to be a problem without a good solution.
This one wouldn’t be very good, either. Automating the data input is the way to go, if you ask me. He did ask, and I gave him an answer. Whether you folks like it or not.
The question he asked was “I would like to know if something similar exists before trying to create it”. Maybe understanding the problem domain isn’t the only problem; reading comprehension might be one too?
Answer only to what you have been asked?
If you say “X asked and I answered”, that commonly means that you claim your answer has something to do with the question. If you understand that your answer has nothing to do with the question, there’s no need to point out that the person asked.
Just record everything (camera, microphone, GPS, your activity such as which web page you are reading), upload it to the cloud, and you can analyze it later.
Fixed that for you.
In most jurisdictions that runs into problems with the law.
Exactly!
There’s a weird cold war in software design, where everyone knows that they can use ‘security’ to win any argument, but we must all refrain from doing so, because that ratchet only goes one way.
The deal is that no one can ever argue against ‘security’, so you always win if you bring it up, but if you use that against me I’ll retaliate, and the project will fail (very very securely).
Also, unrelated: if you ever hear someone bragging about their amazing release process, just nod and ask them about the emergency release process. That’s what they ACTUALLY use.
When we get into discussions about security, the best tools I’ve found are:
Attack Trees: If someone wants to add a new security feature, they have to justify it by pointing at an attack that is not covered by other mitigations (see the toy example after this list).
Cost/Risk analysis: Decide if it is worth worrying about state-level actors / professional criminals / script kiddies.
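For illustration, a toy attack tree might look like the sketch below. The goal, attacks, and mitigations are made up; the point is that a proposed security feature should map to a leaf that isn’t already covered.

```python
# Illustrative attack tree (invented example, not from the comment above):
# the root is the attacker's goal, children are ways to achieve it, and a
# proposed mitigation is justified by an uncovered leaf.
attack_tree = {
    "goal": "read customer data",
    "attacks": [
        {"attack": "steal a developer laptop",
         "mitigations": ["full-disk encryption", "short-lived credentials"]},
        {"attack": "SQL injection in the web app",
         "mitigations": ["parameterized queries", "input validation"]},
        {"attack": "bribe an insider",
         "mitigations": []},   # uncovered leaf: a new control here is justified
    ],
}
```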
Doesn’t work for me. I am the guy saying “we should not be doing X, because when you google for X, the first three results are all telling you that you definitely shouldn’t be doing X”, and everyone else is “dude, you already spent the whole day trying to solve this issue, just do it the easy way and move on to the other urgent high-priority tasks”.
Probably depends on the type of company, i.e. what the trade-off is between “doing the project faster” and “covering your ass” for your superiors. If they have little to lose by being late, but can potentially get sued for ignoring a security issue, then yes, this is really scary.
A possible solution is to tell the developer to just do it as fast as possible, but still in a perfectly secure way. Have daily meetups asking him ironically whether he is still working on that one simple task. But also make him sign a document that you can deduct his yearly salary if he knowingly ignores a security issue. -- Now he has an incentive to shut up about the security issues (to avoid giving a proof that he knew about them).
“A possible solution is to tell the developer to just do it as fast as possible, but still in a perfectly secure way. ”
Thanks, Satan!
Ain’t no such thing.
Which particular corner of software do you have in mind?
All of it?
I mean, not seriously, but I’ve done 2 decades in the industry, at a total of 5 companies, and I see it everywhere.
Dev A: We should do this with a cloud-based whatever.
Dev B: No, no, we should stick with our desktop app.
Bosses: Hmm…
Dev A (triumphantly): No, no, putting everything on the cloud is BEST PRAKTUS!!!!
Bosses: (Gasp!!)
Dev B: (in desperation, transgressing...) What about... security?
Bosses: (Double gasp)
Dev A: (disbelief) You wouldn’t…
Dev B: A’s mad scheme exposes us to the viruses and also the worms.
Bosses: We agree with B!
Dev A: You realize, of course, this means war.
(Much later)
Dev B: I’m just saying that we could try ‘not’ encoding every string in pig latin, as most people would be able to decrypt this with minimal effort and it is massively increasing our translation budgets.
Dev A: So you are in favor of making our software less secure?
Dev B: Hahaha, no, of course not. That was just a test. I’m a double-red-belt qualified expert in Security Singing from every App academy. I was just making sure that you were too.
There are elements and leanings toward this combative view of security in a whole lot of companies, both in IT departments and in software-focused corporations. I haven’t seen even a small fraction of such places (only maybe a few hundred directly and indirectly), but it seems rare that it gets to strategic levels (aka cold war with each side hesitant to change the status quo) - most places are aware of the tradeoffs and able to make risk-estimate-based decisions. It helps a LOT to have developers do the initial risk and attack value estimates.
I’ll agree about the emergency/patch deployment process being the one to focus on. There’s something akin to Gresham’s law in ops methodology—bad process drives out good.
“developers do the initial risk and attack value estimates”
You mean trust in-house devs? Heresy! If they were any good they wouldn’t work here! Only consultants can be relied upon.
heh. Consultants are the people who couldn’t meet our hiring bar, so we pay them twice as much to avoid any long-term responsibility for outcomes. They are useful at making sure our devs have asked the right questions and considered the right options. But the actual analysis and decision starts and ends on the team (and management) that’s going to actually run the system and deal with the consequences.
Not everywhere, and not as completely sane as I’m stating it—there’s a lot of truth in Dilbert. But if it’s too bad where you are, go elsewhere. There are good software teams and they’re hiring.
Do you have a reliable way to distinguish good teams from bad ones, before you sign the paperwork and put in your notice?
I’ve stayed in jobs I wanted to leave a couple of times now, because my team was a reasonably good team and I was afraid that elsewhere I would end up with Dilbert’s boss.
Not terribly reliable, but you can get a start by asking Joel’s questions (https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-steps-to-better-code/) and Kate Matsudaira’s (http://katemats.com/questions-for-candidates-to-ask-in-an-interview/).
More importantly, the overall software dev market is such that you can change 3-4 times in one year without really limiting your prospects, as long as you can explain what you’re looking for and why you think the next one is it. You probably can’t do that two years in a row, but trying a new job isn’t a life sentence, it’s an exploration.
Why do you care if you’re running in a simulation? Well, if you’re a piece of malware, like a computer virus or Internet worm, you care whether you’re being analyzed by an anti-malware researcher. If you’re running in a researcher’s lab instead of actually infecting target systems, you’d want to shut down so the researcher can’t observe your real behavior and find out how to exterminate you. However, this means that if you can be fooled into believing you’re running in a simulation when you’re not, you can be made to shut down in the wild.
Yep, that happened. It’s not even the first time.
The WannaCrypt worm contained logic that basically said, “Look for the following properties in the Internet. If you observe them, that means you’re not running in the real Internet; you’re running in a simulation.” But the researcher was able to cause those properties to become true in the real Internet, thereby convincing the live malware that was infesting the actual Internet to believe it was in a simulation and shut down.
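The kill-switch logic was roughly of the following shape. This is a schematic reconstruction in Python, not the worm’s actual code, and the domain below is a placeholder rather than the real hard-coded one.

```python
# Schematic reconstruction of a WannaCrypt-style kill switch (not the real code).
# The worm tried to reach a hard-coded, unregistered domain; if the request
# succeeded, it concluded it was being analyzed in a sandbox and shut down.
# Registering the domain made the check succeed everywhere, halting the live worm.
import urllib.request

KILL_SWITCH_URL = "http://example-unregistered-domain.invalid/"  # placeholder

def should_run():
    try:
        urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
        return False  # domain responded: "we're in a simulation", shut down
    except Exception:
        return True   # unreachable, as expected in the wild: keep running

if should_run():
    pass  # the malicious payload would go here
```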
Anti-analysis or anti-debugging features, which attempt to ask “Am I running in a simulation?”, are not a new thing in malware, or in other programs that attempt to extract value from humans — such as copy-protection routines. But they do make malware an interesting example of a type of agent for which the simulation hypothesis matters, and where mistaken beliefs about whether you’re in a simulation can have devastating effects on your ability to function.
Why do you think a piece of software has the same goals as its creator? My conscious planning mind doesn’t have the same goals as evolution.
Current software doesn’t even have goals, it has behaviors. Ascribing desires and decision-making to it leads to incorrect beliefs. AIs will have goals, but they’ll be influenced and shaped by their creators rather than being fully specified.
Can we make a list of all the best (maybe not the best, but the ones people use) implementation intentions/TAPs for rationality? That would be instantly useful to anyone who encounters it.
Also, making a list for general TAPs/implementation intentions LWers find useful in their life would also be very helpful to everyone.
I don’t have enough karma to even make a post in discussion, so can someone take up my quest?
My impression is that this wouldn’t be as useful as the outside view might suggest, because TAPs tend to be pretty individualized.
Also, I think that you’d get more value out of learning the general schema of rationality techniques (EX: what TAPs are vs what TAPs people use).
That said, I have a few of them in this post here that you may like to check out.
Do you find it demotivating to do mathematics that is assigned to you in school, compared to doing mathematics on your own? I’m currently having difficulty getting myself to do mathematics that’s assigned to me.
It works similarly for me with programming. I love programming, except when I have a programming task assigned, and must provide reports of how long it took me to solve it, and must debate whether what I am doing now is the highest priority or whether I should be doing something else instead (such as googling for existing solutions for this or similar problems)...
What you need to feel good at deep work is:
a task that optimally fits your abilities (not boringly simple, not scarily difficult);
a meaningful context (even if the meaning is: I am playing / exploring);
a clear goal (so you have feedback on whether you are approaching it);
a working environment without distractions.
Another problem:
https://protokol2020.wordpress.com/2017/05/07/problem-with-perspective/
Okay… so this draws on a couple of things which can be confusing: 1) perspective projections, and 2) mapping spheres onto 2D planes.
Usually when we think of a field of vision, we imagine some projection that maps the 3D world in front of us to some 2D rectangular image. And that’s all well and good. We don’t expect the lines in the image to preserve the angles they had in 3D.
I think what the author of the post is saying is that if you use a cylindrical projection that wraps around 360 degrees horizontally, then the lines will appear parallel when you unwrap it. But there’s nothing wrong with this. If it seems like a contradiction because the lines cross each other at right angles in 3D, that’s because in a z-aligned cylindrical projection the point where the lines cross sits at one of the singularities at the poles. And if the cylindrical projection is not z-aligned, the lines won’t be parallel and will cross each other at some angle.
I guess you can also think of this as two projections. There are the two lines on the floor, which are projected up onto the bird’s panoramic view (a sphere); then the sphere is projected onto a z-aligned cylinder, and the cylinder is unwrapped to give us our 2D image with the two lines parallel.
Like how, if you projected two perpendicular lines up onto the bottom of a globe, they might align with, say, the 0°/180° and 90°/270° meridians, but they would appear parallel in the output cylindrical projection.
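To make the cylindrical-projection argument concrete (a sketch, with the bird at the origin, the cylinder axis vertical, and the ground at height −h): a point (x, y, z) projects to the unwrapped coordinates

```latex
\theta = \operatorname{atan2}(y, x), \qquad v = \frac{z}{\sqrt{x^{2} + y^{2}}}.
```

The half of one ground line given by (t, 0, −h) for t > 0 maps to the vertical ray θ = 0, and the half of the perpendicular line (0, t, −h) maps to θ = π/2; in the unwrapped image all four half-lines become parallel vertical rays approaching the horizon v = 0, even though they cross at right angles on the ground.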
There is no real paradox here, of course. At least not in reality. Only in a bird’s head perhaps, when he says:
Those two birds in front of me are flying parallel to each other; one is going North, one is going West.
Well, if the bird knows they only appear to fly parallel, then he’s good.
This is assuming that by “perspective” we mean something like “projection onto a sphere”. Then the lines become great semicircles and it’s true that they are parallel at the horizon, at least in the sense that the great circle representing the horizon meets them each at a right angle.
By “perspective” I mean the fact that a stick twice as far away appears half as long. Or z times as far appears z times shorter. (Provided that no rotation has been involved.)
The poor bird thus sees all the directions as parallel. Which is difficult to imagine, but it just must be so.
Then everyone would. What is the difference between a bird flying 2m above the ground, and a 2m tall human? Do all directions seem parallel to you?
Yes, they do. At a distance, all directions seem parallel.
Except that I don’t deal with that many directions at once. I never see a bird flying to the West near the horizon and a bird flying to the East near the horizon at the same time. A bird does see that at once. It sees how they fly apart from each other and yet fly parallel at the same time. It is counterintuitive for me, but not for the bird, I guess.
I can, however, see two birds flying away from me, one to the North, the other to the West, both far away. They become smaller and smaller, but the apparent distance between them remains practically unchanged.
I quickly rationalize this as an interesting illusion, at the most.
If you surround the bird B with a ten-meter-radius sphere and map each point A on the ground to the intersection between the line segment AB and the sphere, the x and y axes map to a total of four curves along the lower half of the sphere, all of which are, in fact, parallel at the equator.
This way, only smaller parts of the lines are parallel (parallel enough), while in reality, or should I say on the plane, the greater part of those lines is parallel.
Mapping onto the sphere, even mentally, doesn’t account for that. And the bird must know this, because it flies miles and miles.
Most of reality maps to near the equator, therefore the bird’s eye would evolve to have most receptors near the equator and most of its visual cortex would focus there. (Assuming that things don’t become more important to the bird as they grow nearer :P)
Short enough to just post here rather than linking:
Is there an unstated assumption that the panoramic view is accomplished by mapping to a human-evolved ~135 degree field of view? I don’t think this would happen in a brain evolved and trained on panoramic eyes/sensors. It doesn’t happen in reality, where panoramic views exist everywhere and are generally accessed by turning our heads.
Closer objects must appear bigger, and this kind of perspective is inescapable, whether for us, for cameras, or for birds.
From this, the apparent parallelism of two lines going out from a single point follows. How, then, does a creature with 360-degree vision handle this? When the straight road going to the North is parallel to another straight road going to the West, which is parallel to yet another straight road going to the South? At least at some distance, and then out to the horizon.
I have an idea, but first I am asking you. How?
Parallel lines appear to intersect according to perspective. But, the more distant parts of the lines are the parts that appear to intersect. Here, where the lines actually do intersect, the more distant parts are away from the intersection. If these are ideal lines such that one could not gauge distance, and one is only looking downward, such as a projection onto a plane, then they are visually indistinguishable from parallel lines. Whether that’s the same thing as them appearing to be parallel may be … a matter of perspective. But, since this is a bird with 360 degree view, it can see that the lines do not also extend above the bird as parallel lines would, so they do not appear parallel to it.
Actually parallel lines appear to intersect.
Actually non-parallel lines which go out from a point at angle alpha appear to be parallel far away from your standing point, above the intersection point.