Onnimikki writes: James Stewart, author of the calculus textbooks many of us either loved or loved to hate, has died. In case you ever wondered what the textbook was funding, this story has the answer: a $32 million home overlooking a ravine in Toronto, Canada.
166 comments | yesterday
KentuckyFC writes: Statisticians have long thought it impossible to tell cause and effect apart using observational data. The problem is to take two sets of correlated measurements, say X and Y, and find out whether X caused Y or Y caused X. That's straightforward with a controlled experiment in which one variable can be held constant to see how this influences the other. Take, for example, a correlation between wind speed and the rotation speed of a wind turbine. Observational data gives no clue about cause and effect, but an experiment that holds the wind speed constant while measuring the speed of the turbine, and vice versa, would soon give an answer. But in the last couple of years, statisticians have developed a technique that can tease apart cause and effect from observational data alone. It is based on the idea that any set of measurements always contains noise. However, the noise in the cause variable can influence the effect, but not the other way round. So the noise in the effect dataset is always more complex than the noise in the cause dataset. The new statistical test, known as the additive noise model, is designed to find this asymmetry. Now statisticians have tested the model on 88 sets of cause-and-effect data, ranging from altitude and temperature measurements at German weather stations to the correlation between rent and apartment size in student accommodation. The results suggest that the additive noise model can tease apart cause and effect correctly in up to 80 per cent of cases (provided there are no confounding factors or selection effects). That's a useful new trick in a statistician's armoury, particularly in areas of science where controlled experiments are expensive, unethical or practically impossible.
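The idea behind the additive noise model can be sketched in a few lines. This is a minimal illustration, not the test used in the paper: it simulates data where X causes Y through a nonlinear map, fits a regression in both directions, and checks which direction leaves residuals that look independent of the input (using a crude second-moment correlation in place of a proper independence test):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a known causal direction: X causes Y through a nonlinear map.
x = rng.uniform(-1, 1, 2000)
y = x**3 + 0.1 * rng.normal(size=2000)

def dependence_score(inp, out, degree=5):
    """Fit out = f(inp) + residual with a polynomial, then measure how
    strongly the residuals still depend on the input. Fitted in the true
    causal direction, the residuals should be (nearly) independent."""
    coeffs = np.polyfit(inp, out, degree)
    resid = out - np.polyval(coeffs, inp)
    # Polynomial residuals are uncorrelated with inp by construction,
    # so look at second-order structure instead.
    return abs(np.corrcoef(resid**2, inp**2)[0, 1])

forward = dependence_score(x, y)   # fit Y = f(X) + noise
backward = dependence_score(y, x)  # fit X = g(Y) + noise

# The fit in the true causal direction leaves "cleaner" residuals,
# so the forward score comes out lower than the backward score.
print(forward, backward)
```

The asymmetry is exactly the one described above: inverting the causal map tangles the noise into the fitted variable, so the backward residuals retain structure that the forward residuals lack.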
132 comments | 2 days ago
An anonymous reader writes: I graduated with a degree in the liberal arts (English) in 2010 after having transferred from a Microbiology program (not for lack of ability, but for an enlightening class wherein we read Portrait of the Artist). Now, a couple years on, I'm 25, and though I very much appreciate my education for having taught me a great deal about abstraction, critical thinking, research, communication, and cheesily enough, humanity, I realize that I should have stuck with the STEM field. I've found that the jobs available to me are not exactly up my alley, and that I can better impact the world, and make myself happier, doing something STEM-related (preferably within the space industry — so not really something that's easy to just jump into). With a decent amount of student debt already amassed, how can I best break into the STEM world? I'm already taking online courses where I can, and enjoy doing entry-level programming, maths, etc.
Should I continue picking things up where and when I can? Would it be wiser for me to go deeper into debt and get a second undergrad degree? Or should I try to go into grad school after doing some of my own studying up? Would the military be a better choice? Would it behoove me to just start trying to find STEM jobs and learn on the go (I know many times experience speaks louder to employers than a college degree might)? Or perhaps I should find a non-STEM job with a company that would allow me to transfer into that company's STEM work? I'd be particularly interested in hearing from people who have been in my position and from employers who have experience with employees who were in my position, but any insight would be welcome.
279 comments | 4 days ago
An anonymous reader sends this excerpt from Quanta Magazine:
"Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why these algorithms, or biological learning, work.
Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos. The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called "renormalization," which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables the artificial neural networks to categorize data as, say, "a cat" regardless of its color, size or posture in a given video.
"They actually wrote down on paper, with exact proofs, something that people only dreamed existed," said Ilya Nemenman, a biophysicist at Emory University.
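The renormalization in question is, in its simplest real-space form, Kadanoff-style block-spin coarse-graining: summarize a system at a larger scale while discarding microscopic detail. As an illustrative toy (not the Mehta-Schwab construction itself), here is majority-rule decimation of an Ising-like spin lattice:

```python
import numpy as np

def block_spin(lattice, b=2):
    """Coarse-grain a 2D lattice of +/-1 spins by majority vote over
    b x b blocks (ties broken toward +1). Repeated application is the
    decimation step of a real-space renormalization group."""
    n = lattice.shape[0]
    block_sums = lattice.reshape(n // b, b, n // b, b).sum(axis=(1, 3))
    return np.where(block_sums >= 0, 1, -1)

spins = np.array([[ 1,  1, -1, -1],
                  [ 1,  1, -1,  1],
                  [-1, -1,  1,  1],
                  [-1,  1,  1,  1]])
coarse = block_spin(spins)
# Each 2x2 block is replaced by its majority spin, halving the lattice
# while preserving the large-scale pattern.
print(coarse)
```

The analogy the paper draws is that each layer of a deep network performs a step like this one: it keeps the coarse features relevant at the next scale and throws away the rest.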
45 comments | about two weeks ago
KentuckyFC writes: One of the big applications for quantum computers is finding the prime factors of large numbers, a technique that can help break most modern cryptographic codes. Back in 2012, a team of Chinese physicists used a nuclear magnetic resonance quantum computer with 4 qubits to factor the number 143 (11 x 13), the largest quantum factorization ever performed. Now a pair of mathematicians say the technique used by the Chinese team is more powerful than originally thought. Their approach is to show that the same quantum algorithm factors an entire class of numbers with factors that differ by 2 bits (like 11 and 13). They've already discovered various examples of these numbers, the largest so far being 56153. So instead of just factoring 143, the Chinese team actually quantum factored the number 56153 (233 x 241, which differ by two bits when written in binary). That's the largest quantum factorization by some margin. The mathematicians point out that their discovery will not help code breakers since they'd need to know in advance that the factors differ by 2 bits, which seems unlikely. What's more, the technique relies on only 4 qubits and so can be easily reproduced on a classical computer.
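The arithmetic behind the claim is easy to verify directly:

```python
# Verify the two factorizations and the "factors differ by 2 bits"
# property the mathematicians' technique relies on.
def bit_difference(a, b):
    """Number of bit positions in which a and b differ (Hamming distance)."""
    return bin(a ^ b).count("1")

assert 11 * 13 == 143
assert 233 * 241 == 56153
assert bit_difference(11, 13) == 2    # 1011 vs 1101
assert bit_difference(233, 241) == 2  # 11101001 vs 11110001
print("both factorizations check out")
```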
62 comments | about two weeks ago
Recently, you had a chance to ask child prodigy, author, and activist Adora Svitak about education, women in STEM, and politics. Below you'll find her answers to your questions.
107 comments | about three weeks ago
First time accepted submitter Ugmug (1495847) writes Last year, University of Pennsylvania researchers Alexander J. Stewart and Joshua B. Plotkin published a mathematical explanation for why cooperation and generosity have evolved in nature. Using the classical game theory match-up known as the Prisoner's Dilemma, they found that generous strategies were the only ones that could persist and succeed in a multi-player, iterated version of the game over the long term. But now they've come out with a somewhat less rosy view of evolution. With a new analysis of the Prisoner's Dilemma played in a large, evolving population, they found that adding more flexibility to the game can allow selfish strategies to be more successful. The work paints a dimmer but likely more realistic view of how cooperation and selfishness balance one another in nature.
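For readers who want to experiment, the long-run payoff between two "memory-one" strategies (the class Stewart and Plotkin analyze) can be computed from the stationary distribution of a four-state Markov chain over round outcomes. A sketch, with illustrative strategies of my own choosing rather than the paper's:

```python
import numpy as np

# States of one round, from player 1's perspective: CC, CD, DC, DD.
# A memory-one strategy is the probability of cooperating after each state.
PAYOFF = np.array([3, 0, 5, 1])  # R, S, T, P as seen by player 1

def long_run_payoffs(p, q, steps=1000):
    """Average payoffs for memory-one strategies p (player 1) and
    q (player 2); q sees each state with CD and DC swapped."""
    q = [q[0], q[2], q[1], q[3]]
    M = np.zeros((4, 4))
    for s in range(4):
        pc, qc = p[s], q[s]
        M[s] = [pc * qc, pc * (1 - qc), (1 - pc) * qc, (1 - pc) * (1 - qc)]
    v = np.full(4, 0.25)
    for _ in range(steps):  # power iteration to the stationary distribution
        v = v @ M
    return v @ PAYOFF, v @ PAYOFF[[0, 2, 1, 3]]

# Generous tit-for-tat vs. unconditional defection (illustrative choice):
gtft = [1, 1/3, 1, 1/3]
alld = [0, 0, 0, 0]
payoffs = long_run_payoffs(gtft, alld)
print(payoffs)  # the defector exploits the generous player here
```

Against a lone defector, generosity loses (2/3 vs. 7/3 per round in this example); the paper's point is about which strategies survive when the whole population evolves, not any single pairing.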
213 comments | about three weeks ago
We've mentioned several times over the years the Antikythera Mechanism, the astounding early analog computer recovered from a Greek shipwreck in good enough shape to allow modern recreations. The device has been attributed to different Greek mathematicians and thinkers, such as Archimedes, Hipparchus, and Posidonius, but as reader puddingebola writes, "Current research suggests its origin may be much earlier, and its workings based on Babylonian arithmetical methods rather than Greek trigonometry, which did not exist at the time." Puddingebola excerpts from the NYT article: Writing this month in the journal Archive for History of Exact Sciences, Dr. Carman and Dr. Evans took a different tack. Starting with the ways the device's eclipse patterns fit Babylonian eclipse records, the two scientists used a process of elimination to reach the conclusion that the "epoch date," or starting point, of the Antikythera Mechanism's calendar was 50 years to a century earlier than had been generally believed.
62 comments | about three weeks ago
HughPickens.com writes Gerrymandering is the practice of establishing a political advantage for a particular party by manipulating district boundaries to concentrate all your opponents' votes in a few districts while keeping your party's supporters as a majority in the remaining districts. For example, in North Carolina in 2012 Republicans ended up winning nine out of 13 congressional seats even though more North Carolinians voted for Democrats than Republicans statewide. Now Jessica Jones reports that researchers at Duke are studying the mathematical explanation for the discrepancy. Mathematicians Jonathan Mattingly and Christy Vaughn created a series of district maps using the same vote totals from 2012, but with different borders. Their work was governed by two principles of redistricting: a federal rule requires each district have roughly the same population and a state rule requires congressional districts to be compact. Using those principles as a guide, they created a mathematical algorithm to randomly redraw the boundaries of the state's 13 congressional districts. "We just used the actual vote counts from 2012 and just retabulated them under the different districtings," says Vaughn. "If someone voted for a particular candidate in the 2012 election and one of our redrawn maps assigned where they live to a new congressional district, we assumed that they would still vote for the same political party."
The results were startling. After re-running the election 100 times with a randomly drawn nonpartisan map each time, the average simulated election result was 7 or 8 U.S. House seats for the Democrats and 5 or 6 for Republicans. The maximum number of Republican seats that emerged from any of the simulations was eight. The actual outcome of the election — four Democratic representatives and nine Republicans — did not occur in any of the simulations. "If we really want our elections to reflect the will of the people, then I think we have to put in safeguards to protect our democracy so redistrictings don't end up so biased that they essentially fix the elections before they get started," says Mattingly. But North Carolina State Senator Bob Rucho is unimpressed. "I'm saying these maps aren't gerrymandered," says Rucho. "It was a matter of what the candidates actually was able to tell the voters and if the voters agreed with them. Why would you call that uncompetitive?"
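A toy version of the re-tabulation idea is easy to write down. This sketch (hypothetical vote-share numbers, random precinct-to-district assignment) ignores geography and compactness entirely, unlike the Duke algorithm, but shows the flavor of re-running an election under many maps while holding the votes fixed:

```python
import random

random.seed(1)

# Toy state: 260 equal-population precincts with Democratic vote shares
# drawn around a slight statewide Democratic majority (made-up numbers).
precincts = [random.gauss(0.51, 0.10) for _ in range(260)]
N_DISTRICTS = 13

def simulate_election(precincts):
    """Randomly assign precincts to 13 equal-size districts (no
    contiguity constraint) and count Democratic seats."""
    shuffled = precincts[:]
    random.shuffle(shuffled)
    size = len(shuffled) // N_DISTRICTS
    seats = 0
    for d in range(N_DISTRICTS):
        district = shuffled[d * size:(d + 1) * size]
        if sum(district) / size > 0.5:
            seats += 1
    return seats

results = [simulate_election(precincts) for _ in range(100)]
avg = sum(results) / len(results)
print(avg, "average Democratic seats out of 13")
```

With the same votes, a slight statewide majority translates into a seat majority under nearly every random map, which is the intuition behind the Duke result: an outcome far outside the simulated range points to the map, not the voters.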
413 comments | about three weeks ago
An anonymous reader writes Last week, Riecoin — a project that doubles as a decentralized virtual currency and a distributed computing system — quietly broke the record for the largest prime number sextuplet. This happened on November 17, 2014 at 19:50 GMT, and the calculation took only 70 minutes using the massive distributed computing power of its network. This week the project outdid itself and beat its own record on November 24, 2014 at 20:28 GMT, achieving numbers 654 digits long, 21 more than its previous record.
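A prime sextuplet is six primes in the densest admissible pattern (p, p+4, p+6, p+10, p+12, p+16). The record numbers are hundreds of digits long and need serious primality tests, but the pattern itself can be checked with toy trial division:

```python
OFFSETS = (0, 4, 6, 10, 12, 16)  # the densest admissible 6-prime pattern

def is_prime(n):
    """Trial division; fine for small demonstrations only."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_sextuplet(p):
    return all(is_prime(p + k) for k in OFFSETS)

# The two smallest prime sextuplets start at 7 and 97.
sextuplets = [p for p in range(2, 100) if is_sextuplet(p)]
print(sextuplets)
```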
51 comments | about three weeks ago
A few days ago you had a chance to ask the people at Hampton Creek about their products and the science of food. Below you'll find the answers to your questions from a number of Hampton Creek employees.
47 comments | about three weeks ago
HughPickens.com writes: Every year the works of thousands of authors enter the public domain, but only a small percentage of these end up being widely available. So how do organizations such as Project Gutenberg choose which works to focus on? Allen Riddell has developed an algorithm that automatically generates an independent ranking of notable authors for any given year. It is then a simple task to pick the works to focus on or to spot notable omissions from the past. Riddell's approach is to look at what kind of public domain content the world has focused on in the past and then use this as a guide to find content that people are likely to focus on in the future.
Riddell's algorithm begins with the Wikipedia entries of all authors in the English language edition (PDF) — more than a million of them. His algorithm extracts information such as the article length, article age, estimated views per day, time elapsed since last revision, and so on. This produces a "public domain ranking" of all the authors that appear on Wikipedia. For example, the author Virginia Woolf has a ranking of 1,081 out of 1,011,304, while the Italian painter Giuseppe Amisani, who died in the same year as Woolf, has a ranking of 580,363. So Riddell's new ranking clearly suggests that organizations like Project Gutenberg should focus more on digitizing Woolf's work than Amisani's. Of the individuals who died in 1965 and whose work will enter the public domain next January in many parts of the world, the new algorithm picks out T.S. Eliot as the most highly ranked individual. Others highly ranked include Somerset Maugham, Winston Churchill, and Malcolm X.
55 comments | about a month ago
Bennett Haselton writes: My last article garnered some objections from readers saying that the sample sizes were too small to draw meaningful conclusions. (36 out of 47 survey-takers, or 77%, said that a picture of a black woman breast-feeding was inappropriate; while in a different group, 38 out of 54 survey-takers, or 70%, said that a picture of a white woman breast-feeding was inappropriate in the same context.) My conclusion was that, even on the basis of a relatively small sample, the evidence was strongly against a "huge" gap in the rates at which the surveyed population would consider the two pictures to be inappropriate. I stand by that, but it's worth presenting the math to support that conclusion, because I think the surveys are valuable tools when you understand what you can and cannot demonstrate with a small sample. (Basically, a small sample can present only weak evidence as to what the population average is, but you can confidently demonstrate what it is not.) Keep reading to see what Bennett has to say.
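The math in question can be made concrete with a standard two-proportion z-test on the quoted numbers (a textbook test, not necessarily the exact calculation Bennett presents):

```python
from math import sqrt, erfc

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# 36/47 (77%) vs. 38/54 (70%) rated the pictures inappropriate.
z, p = two_proportion_ztest(36, 47, 38, 54)
print(f"z = {z:.2f}, p = {p:.2f}")
```

The p-value is large (around 0.5), so the observed 7-point gap is entirely consistent with chance; at the same time, the confidence interval implied by the same standard error is narrow enough to rule out a "huge" gap, which is exactly the asymmetry Bennett describes.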
246 comments | about a month ago
rossgneumann writes Soon, it's very possible that when you say something like "you have better odds of being struck by lightning," that won't necessarily mean it's all that rare. And there's a good chance that you'll be able to tell that person (roughly) what the odds of that happening are. Research published this week in Nature provides an equation that is reasonably accurate at mathematically predicting lightning strikes. From the article: "There's not a whole lot of noise in Romps's estimates: CAPE [Convective Available Potential Energy] is something that can be predicted fairly easily. 'All [models] in our ensemble predict that [the United States'] mean CAPE will increase over the 21st century, with a mean increase of 11.2 percent per degree Celsius of global warming,' he wrote. 'Overall, the [models] predict a ~50 percent increase in the rate of lightning strikes in the United States over the 21st century.'"
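As a back-of-envelope check (my reading of the two quoted figures, not a calculation from the paper), they are roughly consistent if the 11.2%-per-degree increase compounds over about 4 degrees Celsius of projected warming:

```python
from math import log

rate_per_degC = 0.112   # 11.2% increase in mean CAPE per degree Celsius
total_increase = 0.50   # ~50% more lightning strikes by 2100

# Warming needed for the per-degree rate to compound to the century total:
implied_warming = log(1 + total_increase) / log(1 + rate_per_degC)
print(f"implied warming: {implied_warming:.1f} deg C")
```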
41 comments | about a month ago
An anonymous reader writes Alexander Grothendieck, one of the great eccentric geniuses of 20th century mathematics, has died in France at the age of 86. Grothendieck was the leading mind behind algebraic geometry. He was awarded the Fields Medal in 1966. He reached the very pinnacle of his profession before abandoning the discipline, taking up anti-war activism, retreating into the life of a recluse and refusing to share his research. He died on Thursday in a hospital in Saint-Girons in southwestern France.
49 comments | about a month ago
Bennett Haselton writes: An editorial with 24,000 Facebook shares highlights the differences in public reaction to two nearly identical breastfeeding photos, one showing a black woman and one showing a white woman, each breastfeeding an infant. The editorial decries the outrage provoked by the black woman's photo compared to the mild reaction elicited by the white woman's photo, and attributes the difference to racism. I tried an experiment using Amazon's Mechanical Turk to test that theory. Read on to see the kind of results Bennett found.
350 comments | about a month ago
rossgneumann writes If everyone always wants to look different from everybody else, everybody starts looking the same. At least, that's the conclusion of a recently published mathematical model of the phenomenon. "The hipster effect is this non-concerted emergent collective phenomenon of looking alike trying to look different," in the words of Jonathan Touboul, a mathematical neuroscientist at the Collège de France in Paris.
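Touboul's result comes from a delayed-dynamics model; as a much cruder toy (not his equations), consider agents who all react, with the same delay, against the majority style:

```python
import random

random.seed(0)
N, DELAY, STEPS = 101, 3, 50

# Each agent starts with a random style, +1 or -1.
states = [random.choice([-1, 1]) for _ in range(N)]
history = [states[:]] * DELAY  # the (delayed) styles everyone remembers seeing

for _ in range(STEPS):
    delayed_mean = sum(history[0]) / N
    # Every hipster adopts the opposite of the delayed majority style...
    states = [-1 if delayed_mean > 0 else 1 for _ in range(N)]
    history = history[1:] + [states[:]]

# ...so in trying to differ from the crowd, everyone ends up identical.
conformity = abs(sum(states)) / N
print(conformity)
```

Because every agent sees the same delayed signal and reacts the same way, the population locks into perfect (oscillating) uniformity, the "looking alike trying to look different" effect in miniature.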
176 comments | about a month ago
TaleSlinger writes: One of the great theories of modern cosmology is that the universe began in a "Big Bang," but the mathematical mechanism by which this occurred has been lacking. Cosmologists at the Wuhan Institute have published a proof that the Big Bang could indeed have occurred spontaneously because of quantum fluctuations. The new proof is based on a special set of solutions to a mathematical entity known as the Wheeler-DeWitt equation. In the first half of the 20th century, cosmologists struggled to combine the two pillars of modern physics — quantum mechanics and general relativity — in a way that reasonably described the universe. As far as they could tell, these theories were entirely at odds with each other.
At the heart of their thinking is Heisenberg's uncertainty principle. This allows a small empty space to come into existence probabilistically due to fluctuations in what physicists call the metastable false vacuum. When this happens, there are two possibilities. If this bubble of space does not expand rapidly, it disappears again almost instantly. But if the bubble can expand to a large enough size, then a universe is created in a way that is irreversible. The question is: does the Wheeler-DeWitt equation allow this? "We prove that once a small true vacuum bubble is created, it has the chance to expand exponentially," say the researchers.
429 comments | about a month and a half ago
An anonymous reader writes Carnegie Mellon researchers have just launched Spliddit, a website that offers methods for helping people split rent, divide goods, and share credit. The novelty is that these methods are all "provably fair": there are mathematical proofs showing that each algorithm on the site provides rigorous fairness guarantees. For example, the method for splitting rent is guaranteed to be envy free: the assignment of rooms and division of rent is such that a housemate would never want to swap places with another housemate. All it takes is a pair of siblings to prove that there's no such thing as "provably fair," non-mathematically.
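For two housemates, the envy-free guarantee is easy to see concretely. This sketch uses hypothetical valuations and midpoint pricing; Spliddit's actual algorithm handles n housemates and selects among envy-free solutions more carefully:

```python
RENT = 1000
# Hypothetical: each housemate's dollar valuation of each room.
values = {"alice": {"big": 600, "small": 400},
          "bob":   {"big": 450, "small": 550}}

# Assign rooms to maximize total value (here: alice->big, bob->small),
# then find prices summing to RENT under which neither envies the other:
#   alice doesn't envy bob:  600 - p_big >= 400 - p_small  =>  p_big <= 600
#   bob doesn't envy alice:  550 - p_small >= 450 - p_big  =>  p_big >= 450
lo = (RENT + values["bob"]["big"] - values["bob"]["small"]) / 2      # 450
hi = (RENT + values["alice"]["big"] - values["alice"]["small"]) / 2  # 600
p_big = (lo + hi) / 2  # midpoint of the envy-free price interval
p_small = RENT - p_big
print(p_big, p_small)
```

At these prices each housemate's surplus for their own room exceeds what they would get by swapping, which is the envy-freeness guarantee in miniature; the existence of a nonempty price interval is what the site's proofs establish in general.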
167 comments | about a month and a half ago
An anonymous reader writes: Carol Dweck, a psychology professor at Stanford, has done years of study on how students' attitudes affect their academic achievements. Her work began at the height of the "self-esteem movement," when parents were told to praise their kids' brainpower at every turn. But Professor Dweck found that praise for intelligence or talent — relatively immutable characteristics — only turned kids off of trying subjects they perceived as difficult, like math and science. Praising effort, perseverance, and problem-solving strategies works better. She also says, "There is such a thing as too much praise, we believe." Instead, she suggests engaging with kids about the process itself, showing interest and encouragement when they talk about how they did something.
273 comments | about a month and a half ago