I guess the thing I’m questioning is how well Approval actually reflects the preferences of the electorate. Let’s use Melbourne as an example: it was a safe Labor seat, but is now held by the Greens (there’s a useful summary of the seat’s history at https://www.abc.net.au/news/federal-election-2016/guide/melb/ ). Under Approval, Liberal voters (who know their candidate isn’t going to win, and want to deny Labor the seat) would approve the Greens but not Labor (i.e. approve both Liberal and Greens); Greens voters would only approve the Greens (they definitely won’t vote Liberal); Labor voters would only approve Labor (since adding the Greens to their ballot only weakens their own candidate). If the Greens win, Liberal voters can switch to Labor at the next election (causing more division); or, if Labor wins, they can keep approving the Greens to force Labor to spend resources on the Melbourne electorate. In both cases the result reflects not what people actually want, but how many people follow how-to-vote cards (are how-to-vote cards a thing in the US/outside Australia?). IRV has the same problem, but under Approval the difficulty of deciding whether or not to approve a candidate (versus simply ranking all of them) makes people more likely to follow the how-to-vote card from their party of choice, and those cards are driven by strategic voting, so you get even more false information about the electorate’s preferences. It’s also much harder to vote against someone under Approval, as happened with Stephen Conroy ( https://www.pcworld.idg.com.au/article/355744/what_would_it_take_unseat_conroy_/ has some information, but I recall there were websites devoted to explaining how to vote against Conroy; doing so would be much easier now, given the changes to the Senate ballot in 2016).
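To make the approvals concrete, here’s a minimal sketch of that scenario. The vote shares are invented purely for illustration, not Melbourne’s actual results:

```python
# Minimal sketch of the Melbourne scenario under Approval voting.
# The vote shares below are invented for illustration only.
from collections import Counter

# Each ballot is the set of candidates that voter approves.
ballots = (
    [{"Liberal", "Greens"}] * 25   # Liberal voters also approving Greens to block Labor
    + [{"Greens"}] * 36            # Greens voters approving only their own candidate
    + [{"Labor"}] * 39             # Labor voters bullet-voting to avoid helping the Greens
)

tally = Counter(c for ballot in ballots for c in ballot)
print(tally.most_common())
# [('Greens', 61), ('Labor', 39), ('Liberal', 25)]
# The Greens win on strategic Liberal approvals, even though
# Labor has the largest base of sincere supporters.
```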
aragilar
Why would you prefer Approval over IRV? I’m Australian, where we use IRV, and I’d find it significantly harder to work out who to vote for under Approval. Most voting examples I’ve seen (including jefftk’s) have a small number of candidates, whereas here, in any seat that isn’t safe, there are 10+ candidates (of which I’d probably think about approving half; but if I do that, we’re basically back to a two-party system, meaning Approval is worse than IRV at reflecting my preferences), let alone the 100+ candidates for a Senate seat (which is multi-member, but sufficiently close to IRV that treating them the same from a voter’s decision-making view is reasonable). I can see the advantage of a Condorcet method over IRV (I know the Debian project uses one for elections/project votes), but Approval seems only slightly better than FPTP.
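For comparison with the Approval tally above, here’s a toy IRV count over the same kind of three-party field, again with invented ballots (each ballot is a full ranking, most preferred first):

```python
# Toy IRV count; the ballots are invented for illustration only.
from collections import Counter

def irv_winner(ballots):
    """Repeatedly eliminate the candidate with the fewest first preferences
    until someone holds a majority of the remaining first preferences."""
    ballots = [list(b) for b in ballots]
    while True:
        firsts = Counter(b[0] for b in ballots if b)
        total = sum(firsts.values())
        leader, votes = firsts.most_common(1)[0]
        if votes * 2 > total:
            return leader
        loser = min(firsts, key=firsts.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

ballots = (
    [["Liberal", "Greens", "Labor"]] * 25   # Liberal HTV card preferencing Greens
    + [["Greens", "Labor", "Liberal"]] * 36
    + [["Labor", "Greens", "Liberal"]] * 39
)
print(irv_winner(ballots))
# Greens: after Liberal is eliminated, those preferences flow to the Greens,
# but every voter got to express a full ranking rather than an approve/reject cut-off.
```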
One thing no-one has mentioned yet is who is writing the code: I’m assuming your background is in web development/software engineering? Most deep learning users I’ve encountered are ex-scientists, which strongly discounts the benefits you list. First, many of them only know one language (most likely Python, R or Matlab, with a smattering of others in use). Learning a new language is hard for these ex-scientists (remember, someone doing a CS degree will likely have seen 5+ languages), especially one like JavaScript: rapidly changing, with libraries that make breaking changes, and a whole new set of tools and processes to learn (e.g. minification or tree-shaking).
Second, while the JS community is huge, the vast majority of it works on front-end web development or adjacent fields. Systems like Anaconda give these users a one-click install of all the libraries they need; JS has no equivalent tool, and the libraries they need simply do not exist (or are unmaintained: the JS reader for the standard astronomy format, FITS, hasn’t been touched in years and doesn’t even support the full standard, whereas Python’s equivalent has massive community support and can both read and write FITS).
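For a sense of how mature the Python side is, reading and writing FITS is a few lines with astropy (the filenames here are placeholders):

```python
# Reading and writing FITS with astropy, the community-maintained Python library.
# "example.fits" is a placeholder; assumes its primary HDU holds the image data.
from astropy.io import fits

with fits.open("example.fits") as hdul:
    hdul.info()                 # list the HDUs in the file
    header = hdul[0].header     # primary header (metadata)
    data = hdul[0].data         # primary data array, as a numpy array

# Writing is equally well supported:
fits.writeto("copy.fits", data, header, overwrite=True)
```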
Third, these ex-scientists are going to write imperative code (for better or worse). Types are an extra thing to think about, and their eyes will glaze over if you try to explain the benefits of a well-designed type system.
Fourth, “write once, run anywhere” does not exist for deep learning: look at the number of GPU clusters built on Nvidia GPUs rather than AMD or another chipset. Deep learning is highly tied to hardware, and on mobile especially you are forced into a specific framework per device to get reasonable performance (which can be critical if an application is to succeed). In many cases, using JS would only add to the languages needed, rather than reduce them.
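Even within a single framework the hardware leaks into the code. A minimal PyTorch sketch (the layer sizes are arbitrary):

```python
# Even one framework exposes the hardware: PyTorch code commonly
# branches on whichever accelerator happens to be available.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(128, 10).to(device)   # weights must live on the chosen device
x = torch.randn(32, 128, device=device)       # and so must the inputs
y = model(x)
```

And on mobile you typically abandon this code path entirely for a device-specific framework, which is the point: the hardware dictates the stack.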
It’s possible that JS will eventually evolve into a natural choice for deep learning, but it’s worth keeping the following in mind:
There’s a general estimate that it takes at least 10 years for a scientific/numerical ecosystem to mature: Python’s is 20+ years old (numpy’s predecessors originate in the 90s). Maybe if we wait 10 years the issues with JS (stability, diversity of libraries, easy-to-use distributions) will have gone away.
Few languages achieve broad usage across the different areas of numerical computing; instead they get pigeon-holed into a specific domain, or are domain-specific to begin with (see R for stats or Stan for Bayesian analysis). Python itself still loses to R for stats, and note the use of rpy2 and Julia’s ability to call Python: languages get added, not removed.
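For example, calling R from Python via rpy2 (this sketch assumes rpy2 and a working R installation):

```python
# Calling R from Python via rpy2: the R ecosystem is added alongside Python,
# not replaced by it. Assumes rpy2 and a working R installation.
import rpy2.robjects as robjects

# Fit an R linear model on R's built-in mtcars dataset and print its summary.
robjects.r("fit <- lm(mpg ~ wt, data = mtcars)")
print(robjects.r("summary(fit)"))
```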
Neither the language nor the tooling of JS especially supports numerical computing: you need things like Python’s slicing, or tools like Cython, to make it worth using (R and Matlab have similar language or tooling support). It’s possible JS will add these, but that would be a major change to the language (and you’ve got the lag time for such features to roll out; ex-scientists aren’t the type of people who continuously upgrade their environment).
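For anyone who hasn’t used numpy, this is the kind of slicing meant here (standard numpy, nothing hypothetical):

```python
# The kind of array slicing that numerical work leans on constantly in
# Python/numpy, and that JS arrays have no native equivalent for.
import numpy as np

a = np.arange(36).reshape(6, 6)
col = a[:, 2]           # a whole column, no loop
block = a[1:4, 1:4]     # a 3x3 sub-block, as a view (no copy)
evens = a[::2, ::2]     # every second row and column
masked = a[a % 3 == 0]  # boolean masking
a[1:4, 1:4] += 100      # in-place update of a sub-block
```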