What 3D takes away from a movie

Daniel Engber, for The New Yorker:

I liked the Herzog movie, as well as Godard’s, which made a fetish of its glitchy, sloppy stereography. But I worry that films like these reveal an overarching and myopic ideology, in which 3-D serves as anti-art, or as a tool for the puncturing of spectacle. That mixes up 3-D with the ways that it’s been marketed: it takes for granted that the format really does perform a kind of empty-headed pyrotechnics, and that it really is a marker of excess.

But the secret of 3-D—its central irony, let’s say—is that it isn’t any good for spectacle. Adding a dimension often serves to shrink the objects on the screen, instead of giving them more pomp; trees and mountains end up looking like pieces in a diorama; people seem like puppets. Action, too, suffers in the format, because rapid horizontal movements mess with the illusion and fast-paced edits in 3-D tend to wear a viewer out.

Like the author, I think 3D is great for documentaries. I’ve seen two documentaries in 3D, and both provided a pleasing sense of immediacy—the same quality that breaks the illusion in action movies. I think that’s what movie studios miss: the effect that sells expensive tickets doesn’t necessarily help tell a story. In fact, it can make things look smaller—as the article says—as if they’re in a diorama.

When the Color We See Isn’t the Color We Remember

Johns Hopkins University news release:

In a new paper published in the Journal of Experimental Psychology: General, researchers led by cognitive psychologist Jonathan Flombaum dispute standard assumptions about memory, demonstrating for the first time that people’s memories for colors are biased in favor of “best” versions of basic colors over colors they actually saw.

For example, there’s azure, there’s navy, there’s cobalt and ultramarine. The human brain is sensitive to the differences between these hues—we can, after all, tell them apart. But when storing them in memory, people label all of these various colors as “blue,” the researchers found. The same thing goes for shades of green, pink, purple, etc. This is why, Flombaum said, someone would have trouble glancing at the color of his living room and then trying to match it at the paint store.

This is interesting. There must be more to it, though: I have excellent color perception in the moment, but I’ll often misremember red as green, etc.

Slower thinking during the heat death of the universe

Paul Halpern, on Medium, quoting Freeman Dyson:

It is impossible to calculate in detail the long-range future of the universe without including the effects of life and intelligence. It is impossible to calculate the capabilities of life and intelligence without touching, at least peripherally, philosophical questions. If we are to examine how intelligent life may be able to guide the physical development of the universe for its own purposes, we cannot altogether avoid considering what the values and purposes of intelligent life may be. But as soon as we mention the words value and purpose, we run into one of the most firmly entrenched taboos of twentieth-century science.

As Dyson imagined, a sense of purpose would motivate cognizant life to try to maintain itself as long as humanly — and then transhumanly — possible. Ultimately, humans and other possible intelligent beings in the universe might elect to transfer their conscious awareness to artificial storage and processing units — presuming that artificial intelligence (AI) is possible.

As the universe continued to cool, our AI descendants would need to take action. Unlike Asimov, Dyson does not suggest a mechanism for reversing the growth of entropy. Rather, he imagines a gradual slowing down of thinking processes. Only necessary thoughts would transpire and these would happen at an increasingly snail-like pace. Between thoughts, the AI devices would hibernate to conserve vital, usable energy. By spacing out thoughts more and more, Dyson argues, intelligent existence could persist almost indefinitely, although the number of total thoughts would still be finite.

Dyson’s notion of eternal intelligence is certainly interesting to ponder. My assumption would be that, given (almost) all the time in the world, an intelligent race might learn how to transfer their minds to a sturdy and sustainable medium, as Dyson says. But I’d also assume that such a race might learn the trick of manipulating the universe to its own ends. And who’s to say that hasn’t already happened many times over?

The Higgs mass, fixed by a hypothetical “axion” particle?

Natalie Wolchover, for Quanta Magazine:

What if the Higgs mass, and by implication the laws of nature, are unnatural? Calculations show that if the mass of the Higgs boson were just a few times heavier and everything else stayed the same, protons could no longer assemble into atoms, and there would be no complex structures — no stars or living beings. So, what if our universe really is as accidentally fine-tuned as a pencil balanced on its tip, singled out as our cosmic address from an inconceivably vast array of bubble universes inside an eternally frothing “multiverse” sea simply because life requires such an outrageous accident to exist?

And:

The story of the new model begins when the cosmos was an energy-infused dot. The axion mattress was extremely compressed, which made the Higgs mass enormous. As the universe expanded, the springs relaxed, as if their energy were spreading through the springs of the newly created space. As the energy dissipated, so did the Higgs mass. When the mass fell to its present value, it caused a related variable to plunge past zero, switching on the Higgs field, a molasseslike entity that gives mass to the particles that move through it, such as electrons and quarks. Massive quarks in turn interacted with the axion field, creating ridges in the metaphoric hill that its energy had been rolling down. The axion field got stuck. And so did the Higgs mass.

It’s a remarkable narrative, and a model that brings up as many questions as it purports to answer. So, the research will continue slowly, and not without challenge, as it should. But meanwhile, it’s fascinating to imagine that an accident of the remotest odds might give rise to a universe capable of contemplating itself.

Aligning machine intelligence with human values

Quanta Magazine’s Natalie Wolchover interviews computer scientist Stuart Russell about the future of artificial intelligence:

You could say machines should err on the side of doing nothing in areas where there’s a conflict of values. That might be difficult. I think we will have to build in these value functions. If you want to have a domestic robot in your house, it has to share a pretty good cross-section of human values; otherwise it’s going to do pretty stupid things, like put the cat in the oven for dinner because there’s no food in the fridge and the kids are hungry. Real life is full of these tradeoffs. If the machine makes these tradeoffs in ways that reveal that it just doesn’t get it — that it’s just missing some chunk of what’s obvious to humans — then you’re not going to want that thing in your house.

I don’t see any real way around the fact that there’s going to be, in some sense, a values industry. And I also think there’s a huge economic incentive to get it right. It only takes one or two things like a domestic robot putting the cat in the oven for dinner for people to lose confidence and not buy them.

Then there’s the question, if we get it right such that some intelligent systems behave themselves, as you make the transition to more and more intelligent systems, does that mean you have to get better and better value functions that clean up all the loose ends, or do they still continue behaving themselves? I don’t know the answer yet.

Essentially, the interview is a fantastic primer on the kinds of things computer scientists are up against when it comes to designing intelligent machines. Not just the mechanics of it, but encoding rules based on the way we think, and certain moral codes. All of which means that we have to be able to quantify qualities.
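That last bit, quantifying qualities, is easier to picture with a toy. Here’s a minimal sketch of my own (none of this comes from the interview; the actions, value names, and veto threshold are all made up) in which an agent scores candidate actions against a couple of explicit value functions and falls back to doing nothing when the top-scoring action badly violates any single value:

```python
# A toy illustration (my own, not Russell's proposal): rank actions by their
# total estimated benefit across several human "values," but abstain when the
# winner tramples any individual value too hard.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    effects: dict  # value name -> estimated effect (positive is good)

ACTIONS = [
    Action("cook_the_cat",    {"feed_children": 1.0, "protect_pets": -10.0}),
    Action("order_groceries", {"feed_children": 0.8, "protect_pets": 0.0}),
    Action("do_nothing",      {"feed_children": 0.0, "protect_pets": 0.0}),
]

def choose(actions, veto_threshold=-1.0):
    best = max(actions, key=lambda a: sum(a.effects.values()))
    if min(best.effects.values()) < veto_threshold:
        # A value is badly violated: err on the side of doing nothing.
        return next(a for a in actions if a.name == "do_nothing")
    return best

print(choose(ACTIONS).name)  # -> order_groceries
```

The hard part, of course, is everything the toy hides: where those numbers come from, and whether they capture what’s actually obvious to humans.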

Designing stormtroopers, then and now

Vanity Fair’s Bruce Handy talks to costume designer Michael Kaplan about his craft, and how things have changed since his original stint on Blade Runner:

I think I was the only one who read the script and felt that it should have an old Sam Spade, old gumshoe kind of feeling. When I said that, it kind of hit a note and I got the job.

I learned from Ridley how great it is to re-use things and make new things out of things that already exist in a way, where you’re kind of not even recognizing the object that you started with. I like digging around in thrift shops and I don’t know if that’s a signature but it’s something that I’ve done a lot in my work.

And:

I went up to George Lucas’s archives—huge building—and just spent a day going through sketches and looking just to get the tone of the movie, you know, in my guts and veins so that when I went to London I felt equipped and inspired, which I certainly did.

But the old stormtrooper uniforms would not be usable. Audiences of today have become so sophisticated that a lot of things you could get away with in the past, you can’t anymore. So the new uniforms are much heavier. Also, the action in the film required them to not be “VacuFormed” [like the old uniforms] as those all broke and cracked. These new ones are much more heavy-duty, but they are redesigned, too, they’re not the same stormtroopers.

I love that sort of behind-the-scenes insight.

ILM: an oral history

Alex French and Howie Kahn, for Wired:

Industrial Light & Magic was born in a sweltering warehouse behind the Van Nuys airport in the summer of 1975. Its first employees were recent college graduates (and dropouts) with rich imaginations and nimble fingers. They were tasked with building Star Wars’ creatures, spaceships, circuit boards, and cameras. It didn’t go smoothly or even on schedule, but the masterful work of ILM’s fledgling artists, technicians, and engineers transported audiences into galaxies far, far away.

As it turns 40 this year, ILM can claim to have played a defining role making effects for 317 movies. But that’s only part of the story: Pixar began, essentially, as an ILM internal investigation. Photoshop was invented, in part, by an ILM employee tinkering with programming in his time away from work. Billions of lines of code have been formulated there. Along the way ILM has put tentacles into pirate beards, turned a man into mercury, and dominated box office charts with computer-generated dinosaurs and superheroes. What defines ILM, however, isn’t a signature look, feel, or tone—those change project by project. Rather, it’s the indefatigable spirit of innovation that each of the 43 subjects interviewed for this oral history mentioned time and again. It is the Force that sustains the place.

It’s the epic tale of a company that builds epic tales. But it’s also a hundred fascinating personal details strung together. I’ve been a fan of ILM since I was a kid, and I look forward to watching their work for another 40 years of childhood.

Video game players more likely to navigate using response vs. spatial learning

Press release from Douglas Mental Health University Institute, on ScienceDaily:

A new study … shows that while video game players (VGPs) exhibit more efficient visual attention abilities, they are also much more likely to use navigation strategies that rely on the brain’s reward system (the caudate nucleus) and not the brain’s spatial memory system (the hippocampus).

Past research has shown that people who use caudate nucleus-dependent navigation strategies have decreased grey matter and lower functional brain activity in the hippocampus.

Video gamers now spend a collective three billion hours per week in front of their screens. In fact, it is estimated that the average young person will have spent some 10,000 hours gaming by the time they are 21. The effects of intense video gaming on the brain are only beginning to be understood.

More research needs to be done, but that item about the implications of spatial vs. response learning does not bode well. There’s some related information — more focused on addiction, but still relevant — in a 2013 article published in the journal Hippocampus:

Spatial strategies involve learning the spatial relationships between the landmarks in an environment, while response learning strategies involve learning a rigid set of stimulus-response type associations, e.g., see the tree, turn left. We have shown that spatial learners have increased gray matter and fMRI activity in the hippocampus compared with response learners, while response learners have increased gray matter and fMRI activity in the caudate nucleus.

People with similar views mimic each other’s speech patterns

Monique Patenaude, for the University of Rochester’s NewsCenter:

As social creatures, we tend to mimic each other’s posture, laughter, and other behaviors, including how we speak. Now a new study shows that people with similar views tend to more closely mirror, or align, each other’s speech patterns. In addition, people who are better at compromising align more closely.

“Few people are aware that they alter their word pronunciation, speech rate, and even the structure of their sentences during conversation,” explained Florian Jaeger, associate professor of brain and cognitive sciences at the University of Rochester and coauthor of the study recently published in Language Variation and Change. “What we have found is that the degree to which speakers align is socially mediated.”

“Our social judgments about others and our general attitude toward conflict are affecting even the most automatic and subconscious aspects of how we express ourselves with language,” said lead-author Kodi Weatherholtz, a post-doctoral researcher in Jaeger’s lab.

I find this fascinating because it’s something I’ve noticed with my own speech. I’ve always had an almost Tourette’s-like inclination to parrot — something that’s gotten me in trouble on more than one occasion. But what we’re talking about here is a more sustained pattern of speech, including vocabulary, cadence, and manner of articulation. And in that case, it’s more like matching a dialect in the service of easing conversation than it is automatic mimicry.

Spacetime as the reaction of quantum action

Jennifer Ouellette, for Wired:

It is common to speak of a “fabric” of space-time, a metaphor that evokes the concept of weaving individual threads together to form a smooth, continuous whole. That thread is fundamentally quantum. “Entanglement is the fabric of space-time,” said Swingle, who is now a researcher at Stanford University. “It’s the thread that binds the system together, that makes the collective properties different from the individual properties. But to really see the interesting collective behavior, you need to understand how that entanglement is distributed.”

Tensor networks provide a mathematical tool capable of doing just that. In this view, space-time arises out of a series of interlinked nodes in a complex network, with individual morsels of quantum information fitted together like Legos. Entanglement is the glue that holds the network together. If we want to understand space-time, we must first think geometrically about entanglement, since that is how information is encoded between the immense number of interacting nodes in the system.

The hypothetical scenario in which we describe the folding of a slip of paper to a two-dimensional being improved how I thought about higher dimensions in general. After that, things like the manifestation of distance and time seemed more spooky, and quantum entanglement seemed less spooky.

Building No Man’s Sky’s life-sized digital universe

A lot of ink is being spilled about this lovely indie game, and surely the technology behind it is notable, regardless of the actual gameplay.

Raffi Khatchadourian, for The New Yorker:

To build a triple-A game, hundreds of artists and programmers collaborate in tight coördination: nearly every pixel in Grand Theft Auto’s game space has been attentively worked out by hand. [Chief architect Sean Murray] realized early that the only way a small team could build a title of comparable impact was by using procedural generation, in which digital environments are created by equations that process strings of random numbers.

The article cites Acornsoft’s Elite as one of the first games to employ procedural world generation, owing to the technique’s inherent economy. Generate things on the fly, and you only need to store a seed and a few basic parameters, recreating each item as you encounter it. But procedural worlds also have their drawbacks. Pure chaos looks like nonsensical noise—or, even worse, monotony. To avoid that, Murray and his team have developed techniques that bring some order to the chaos. But figuring out where that balance is struck is an interesting part of the process, too:

Because of No Man’s Sky’s algorithmic structure—with every pixel rendered on the fly—the topography would not be known until the moment of encounter. Theoretically, the game could quickly render a sample of the terrain before deciding that a particular pixel belonged to a river, but then it would also have to render a sample of the terrain surrounding that sample, and so on. “What would end up happening is what we call an intractable problem to which there is only a brute-force solution,” Murray said. “There’s no way to know without calculating everything.” After much trial and error, he devised a mathematical sleight of hand to resolve the problem. Otherwise, the computer would have become mired in building an entire world merely to determine the existence of a drop of water.
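For a feel of how little you need to store, here’s a minimal sketch of seed-based procedural generation; it’s my own illustration, not Hello Games’ or Acornsoft’s actual code. The entire “terrain” is a deterministic function of one seed plus coordinates, so revisiting a spot always regenerates the same landscape from nothing:

```python
# A minimal sketch of seed-based procedural terrain (illustrative only).
# Heights are a pure function of (seed, x, y): nothing is ever stored.
import hashlib

WORLD_SEED = 42  # the only thing that needs to be persisted

def lattice_value(x: int, y: int) -> float:
    """Deterministic pseudo-random height in [0, 1) at an integer grid point."""
    digest = hashlib.sha256(f"{WORLD_SEED}:{x}:{y}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def height(x: float, y: float) -> float:
    """Bilinearly interpolated terrain height at any coordinate."""
    x0, y0 = int(x // 1), int(y // 1)
    fx, fy = x - x0, y - y0
    top    = lattice_value(x0, y0)     * (1 - fx) + lattice_value(x0 + 1, y0)     * fx
    bottom = lattice_value(x0, y0 + 1) * (1 - fx) + lattice_value(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bottom * fy

print(height(1203.25, -77.5))
print(height(1203.25, -77.5))  # same coordinates, same terrain, no storage
```

It also hints at why Murray’s river problem is hard: deciding whether one point is a riverbed means sampling its neighborhood, and that neighborhood’s neighborhood, and so on.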


But how will we really reach the stars?

As important as dreaming about exotic EM engines—perhaps even more so—is thinking about more feasible technologies. After all, those are the technologies we’ll actually develop first.

Astronomer Alastair Reynolds explores these questions in an article for Reuters:

That said, any civilization willing to contemplate an interstellar expedition at close to the speed of light might also settle for something half as fast, or a quarter as fast. It would just be a question of waiting a bit longer for the news. At 10 percent of the speed of light, an expedition could reach the nearest star within 50 years. Such a mission could be achieved using fusion technologies which aren’t too far beyond those now on the drawing boards, although slowing down at the other end does add to the difficulty. Still, 50 years is a long time by any measure. The astronaut Scott Kelly has just embarked on a one-year expedition to the International Space Station, and no one has yet spent longer than 14 consecutive months in space. Clearly we have some way to go before we can contemplate decades-long interstellar missions.
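The 50-year figure checks out with back-of-the-envelope arithmetic (mine, not Reynolds’), ignoring acceleration, deceleration, and relativistic effects:

```python
# Rough cruise time to Proxima Centauri at 10% of light speed (illustrative).
LIGHT_YEAR_M = 9.461e15      # metres per light-year
C = 2.998e8                  # speed of light, m/s
DISTANCE_LY = 4.24           # Proxima Centauri, roughly

cruise_speed = 0.10 * C
years = DISTANCE_LY * LIGHT_YEAR_M / cruise_speed / 3.156e7  # seconds per year
print(f"{years:.0f} years")  # ~42 years of cruising, so "within 50" leaves some margin
```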

Language as a distributed communal object as much as the product of a “language gene”

Elizabeth Svoboda, for Nautilus:

[Computational linguist Simon] Kirby took a unique approach to probing the origins of language: He taught human participants novel languages he had made up. He and his colleagues showed human subjects cards with different shapes and pictures on them, taught them the words for these pictures, and tested them. “Whatever they do, whether they get it right or wrong, we teach it to the next person,” Kirby says. “It’s rather like the game Telephone.”

Remarkably, as the language passed from one learner to the next, it began to acquire cogent structure. After 10 generations, the language had changed to make it easier for human speakers to process. Most notably, it began to show “compositionality,” meaning that parts of words corresponded to their meaning—shapes with four sides, for instance, might all have a prefix like “ikeke.” Thanks to these predictable properties, learners developed a mental framework they could easily fit new words into. “Participants not only learn everything we show them,” Kirby says, “but they can correctly guess words we didn’t even train them on.”

Kirby realized that this process of iterated learning—which depended on brain function but extended beyond it—went a long way toward explaining where language structure came from. Having watched in the lab as ordered languages appeared, he’s skeptical when he sees colleagues get entrenched in purely biological explanations for language’s origins. “There’s been this assumption that brain and behavior are related very simply, but languages emerge out of huge populations of socially embedded agents. The problem with ‘gene for x’ or ‘grammar module y’ is they ignore how something that is the property of an individual is linked to something that is the property of a community.”

I like the idea that multi-generational social networks are an extension of our own neural networks. Also that we make language our own, based not just on basic morphology, but on social mores.
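Kirby’s iterated-learning setup is also fun to caricature in code. What follows is a toy of my own (the meaning space, the word lengths, and the learner’s crude “split each word in half” heuristic are all my simplifications, not his experimental procedure), but it captures the loop: each generation sees only part of the previous generation’s language and fills the gaps by recombining what it did see, which is what nudges the language toward compositional structure:

```python
# Toy iterated-learning loop (illustrative; not Kirby's actual procedure).
import random

SHAPES = ["circle", "square", "triangle"]
COLORS = ["red", "blue", "green"]
MEANINGS = [(s, c) for s in SHAPES for c in COLORS]

def random_word(rng, length=4):
    return "".join(rng.choice("aeikmopstu") for _ in range(length))

def learn(observed, rng):
    # The learner treats the first half of each observed word as a candidate
    # shape marker and the second half as a color marker, then recombines
    # markers to guess words it was never shown.
    shape_marker, color_marker = {}, {}
    for (shape, color), word in observed.items():
        half = len(word) // 2
        shape_marker.setdefault(shape, word[:half])
        color_marker.setdefault(color, word[half:])
    lexicon = {}
    for shape, color in MEANINGS:
        if (shape, color) in observed:
            lexicon[(shape, color)] = observed[(shape, color)]
        else:
            lexicon[(shape, color)] = (shape_marker.get(shape, random_word(rng, 2))
                                       + color_marker.get(color, random_word(rng, 2)))
    return lexicon

def iterate(generations=10, sample_size=6, seed=1):
    rng = random.Random(seed)
    language = {m: random_word(rng) for m in MEANINGS}  # generation 0: arbitrary words
    for _ in range(generations):
        shown = dict(rng.sample(sorted(language.items()), sample_size))
        language = learn(shown, rng)                    # each learner sees only a subset
    return language

for meaning, word in sorted(iterate().items()):
    print(meaning, "->", word)
```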

On the terraforming of Mars with microbes

LSU microbiologist Gary King, quoted in Popular Mechanics:

If we want to grow life in the watery subsurface of Mars, King says, the opening move is identifying the right spot to start. The scant amount of subsurface water recently discovered does not suddenly transform Mars into a fertile Eden. However, “there’s no reason to suspect that the entirety of the planet is effectively sterile—that Mars is so limiting, and so extreme, that it can’t support any microbial life anywhere,” King says. On Earth, King adds, no matter how extreme an environment (“from the dry Atacama desert to geothermal vents under the Atlantic,” he says), life almost always finds a way.

As for finding the most potentially habitable spot, “that’s a task which I think basically continues to come down to a question of water,” King says. In other words, wherever Mars’s subsurface water pools the most, that’s where we’ll want to start.

Granted, the Martian water supply seems to be scarce at best, but it’s possible that places like the recurring slope lineae might be our wettest, best bet. This is a swath of land identified in 2011 by the Mars Reconnaissance Orbiter that visibly darkens with the seasons, suggesting that subsurface water there may ebb and flow in much greater amounts than was found in the Gale crater (where the evidence of liquid water was found last month).

As the article says, using microbes to terraform Mars is a speculative idea. But it’s a worthy thought experiment. Assuming there are no ethical quandaries, the research involved in such an endeavor would increase the breadth of our knowledge of geoengineering, and that may help us here on this planet as well as on others.

In Lifeline, the sole survivor of a spaceship crash relies on you for survival

Laura Hudson, for Boing Boing:

As counterintuitive as it sounds, there’s something about interacting with Taylor through text messages that can feel very intimate, perhaps because we’ve grown so accustomed to communicating our most personal thoughts with our friends through texts—and waiting for their responses with bated breath.

While some mobile games intentionally frustrate players with waiting periods to compel them to spend money, waiting isn’t a coercion tactic in Lifeline, but rather a crucial part of the storytelling experience. If you die several times—or win the game—you can unlock an optional “fast mode” that allows you to skip the waiting periods, although I wouldn’t recommend it. While it might offer instant gratification, it also shatters the sense of immersion you feel, flattening the urgency and anticipation of those intermediate moments.

“When people are playing it, it’s not just about the time that they’re interacting with Taylor,” says Justus. “It’s all the rest of the time when they’re thinking about Taylor. The whole goal was to make something that would become a part of people’s lives.”

It’s such a clever concept — simple, yet well-implemented, and happening not in game time, but in real time. In fact, for me, verisimilitude brings with it a hidden catch: evoking a sense of real-time responsibility can be a distraction throughout your working day, not to mention a source of guilt about leaving your stranded colleague waiting while you sit through actual meetings. Am I overstating the point? Possibly. It’s just a text game, after all. But games will only ask more of us, that much is clear. And the more invested we become in these simulations, the greater the emotional toll. I can’t wait.

Stellar stagecoaches, and interstellar possibilities

Brian McConnell, writing for Centauri Dreams:

A spaceship that is mostly water will be more like a cell than a conventional rocket plus capsule architecture. Space agriculture, or even aquaculture, becomes practical when water is abundant. Creature comforts that would be unthinkable in a conventional ship (hot baths anyone?) will be feasible in a spacecoach. Meanwhile, inflatable structures will eventually enable the construction of large, complex habitats that will be more like miniature O’Neill colonies than a conventional spaceship [4].

In [our book about spacecoaches], Alex and I present a reference design that combines inflatable structures and thin film PV arrays to form a kite-like structure that both has a large PV array area, and can be rotated to provide artificial gravity in the outer areas [5]. The ability to generate artificial gravity while providing ample radiation protection solves two of the thorniest problems in long duration spaceflight. Alex wrote an excellent fictional treatment of the concept for Centauri Dreams called Spaceward Ho! This is intended as a straw man design to kickstart design competitions. We envision a series of design competitions for water compatible electric propulsion technologies, large scale solar arrays, and overall ship designs.

McConnell and his colleagues have done a lot of thinking about interplanetary spacecraft, and aren’t afraid to challenge the notion that such craft should be built around a traditional rigid metal hull. Regardless of whether their ideas come to pass, I think these are the conversations we should be having more broadly.

Interview: Lola VFX team discusses digital makeup

fxguide’s interviews are technical and in-depth, and this one’s no different. This time out, Mike Seymour visited Lola VFX and interviewed several members of their team about, among other things, their fine facial work on films such as Benjamin Button, The Social Network, and the Captain America movies.

But I have to say, one of my favorite parts is this aside:

Edson Williams: We do “repair” on actors who have had plastic surgery. They’ll come in with too much botox, for instance, and there’s no movement in their brows. So, there’s been a few projects where we’ve actually had to animate the brow to mimic the performance they should be giving. That’s happened on a few different projects.

Seymour: It’s interesting, isn’t it? A real face not looking real.

Too funny.

The linguistic mor-fuckin-phology of English infixation

Chi Luu, for JSTOR Daily:

In expletive infixation, common obscene expletives or their milder variants, such as fucking/fuckin, freaking, flipping, effing, goddamn, damn (and bloody/blooming in British and Australian English contexts) are inserted productively into words to express a stronger vehemence.

  1. absolutely: abso-fucking-lutely, abso-bloody-lutely, abso-goddamn-lutely, abso-freaking-lutely
  2. Minnesota: Minne-fucking-sota
  3. fantastic: fan-bloody-tastic

We can see how different expletives can be inserted in exactly the same space in the word absolutely. English speakers can also quickly note that constructions such as *ab-fucking-solutely (infixed after the first syllable) and *fanta-bloody-stic (infixed after the second syllable) are technically possible yet do not sound right (linguistically indicated by an asterisk). This is the case even though the expletive happily appears after the first syllable in fan-tastic but after the second syllable in abso-lutely. They somehow violate the unwritten rules of this infixation construction. Why is this so?

As someone who loves me some wordplay, it’s fascinating to see these things being broken down. I like how certain unspoken rules quickly develop around an otherwise organic process — people know when a violation has occurred. A perfect example is the newer (and shorter-lived) “doge speak,” which linguist Gretchen McCulloch wrote about last year in The Toast: “A Linguist Explains the Grammar of Doge. Wow.”
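To come back to the article’s “Why is this so?”: the generalization I’ve usually seen cited is that the expletive lands immediately before the syllable carrying primary stress (ab-so-LUTE-ly, Min-ne-SO-ta, fan-TAS-tic). Here’s a toy sketch of that rule, my own illustration, with a tiny hand-made syllable table rather than a real pronouncing dictionary:

```python
# Expletive infixation as "insert before the primary-stressed syllable"
# (illustrative; syllabifications and stress indices are hand-coded).
WORDS = {
    # word: (syllables, index of the primary-stressed syllable)
    "absolutely": (["ab", "so", "lute", "ly"], 2),
    "Minnesota":  (["Min", "ne", "so", "ta"], 2),
    "fantastic":  (["fan", "tas", "tic"], 1),
}

def infix(word, expletive="bloody"):
    syllables, stress = WORDS[word]
    return "-".join(["".join(syllables[:stress]), expletive, "".join(syllables[stress:])])

for word in WORDS:
    print(infix(word))
# abso-bloody-lutely
# Minne-bloody-sota
# fan-bloody-tastic
```

Insert the expletive anywhere else and you get exactly the starred forms the article flags, like *ab-bloody-solutely.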

Synesthesia, and tasting sounds

Kate Samuelson, for Motherboard:

[James] Wannerton has a rare form of synaesthesia known as lexical-gustatory synaesthesia, meaning that his taste and hearing senses do not operate independently of each other. As a result, for Wannerton every word and every sound has a distinctive flavour. Although the words and sounds do not usually bear any relation to what they taste like, the flavours are always consistent; “speak”, for example, has tasted like bacon for as long as Wannerton can remember.

“Words and sounds go ‘bink, bink, bink’ in my mouth all the time, like a light flickering on and off,” he explained. “Some tastes are very quick but others can last for hours and make me crave that particular thing; I’ll feel distracted until I actually eat it.”

I have grapheme-color and spatial-sequence synesthesia, but it’s usually subtle. If I envision something, it always has the extra characteristics attached to it: the letter “A” is a red female, and the month of July is located just in front of me, a bit to my left (August is directly ahead). These things are never a distraction. At best, they may be useful for memorization, or they may inform superficial preferences. But I’m fascinated by these extreme cases: people whose experiences are akin to hallucinations, or even to sensory seizures.

Evaluating NASA’s Futuristic EM Drive (updated)

UPDATE: Wired’s Katie M. Palmer has weighed in, explaining in very clear terms why this project is pure fantasy. I’m still a dreamer, and thoughts of hard-to-explain advancements still get my heart racing… but in the end, it’s about the science, and dreaming alone isn’t enough to get us to the stars:

The reason the Eagleworks lab presents results in unrefereed conference proceedings and Internet posts, according to Eric Davis, a physicist at the Institute for Advanced Studies at Austin, is that no peer-reviewed journals will publish their papers. Even arXiv, the open-access pre-print server physicists default to, has reportedly turned away Eagleworks results.

Why the cold shoulder? Either flawed results or flawed theory. Eagleworks’ results so far are very close to the threshold of detection—which is to say, barely perceptible by their machinery. That makes it more likely that their findings are a result of instrument error, and their thrust measurements don’t scale up with microwave input as you might expect. Plus, the physics and math behind each of their claims is either flawed or just… nonexistent.

I am humbly chastened. Now, here’s my original post:

Has NASA built an engine that could get us from Earth to the surface of the moon within four hours? From Earth to Mars in 70 days? How about a trip from Earth to Alpha Centauri in just 92 years?

Such feats may become achievable using electromagnetic propulsion drives now being tested at NASA’s Johnson Space Center, work related to similar ongoing experiments in the UK and China.

According to researchers, “a one-way, non-decelerating trip to Alpha Centauri under a constant one milli-g acceleration” from an EM drive would result in an arrival speed of 9.4 percent of the speed of light.
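That 9.4 percent figure roughly checks out with plain non-relativistic kinematics; this is my own back-of-the-envelope arithmetic, not the researchers’:

```python
# Sanity check: constant 1 milli-g over the distance to Alpha Centauri,
# using v = sqrt(2 * a * d) and ignoring relativistic corrections.
from math import sqrt

G = 9.81                        # m/s^2
LIGHT_YEAR_M = 9.461e15         # metres per light-year
C = 2.998e8                     # speed of light, m/s

a = 1e-3 * G                    # one milli-g
d = 4.37 * LIGHT_YEAR_M         # Alpha Centauri, roughly
v = sqrt(2 * a * d)
print(f"arrival speed ≈ {v / C:.1%} of c")  # ≈ 9.5%, in line with the quoted figure
```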

The concept of an EM Drive as put forth by SPR was that electromagnetic microwave cavities might provide for the direct conversion of electrical energy to thrust without the need to expel any propellant.

This lack of expulsion of propellant from the drive was met with initial skepticism within the scientific community because this lack of propellant expulsion would leave nothing to balance the change in the spacecraft’s momentum if it were able to accelerate.

But:

Paul March, an engineer at NASA Eagleworks, recently reported in NASASpaceFlight.com’s forum (on a thread now over 500,000 views) that NASA has successfully tested their EM Drive in a hard vacuum – the first time any organization has reported such a successful test.

And:

A community of enthusiasts, engineers, and scientists on several continents joined forces on the NASASpaceflight.com EM Drive forum to thoroughly examine the experiments and discuss theories of operation of the EM Drive.

The quality of forum discussions attracted the attention of EagleWorks team member Paul March at NASA, who has shared testing and background information with the group in order to fill in information gaps and further the dialogue.

This synergy between NASASpaceflight.com contributors and NASA has resulted in several contributions to the body of knowledge about the EM Drive.

It’s pretty inspiring that enthusiasts and scientists have been working together on bulletin boards to further science.

So, now, what are the roadblocks to bringing such a thing to production? It’s not power — the technology to power such engines already exists. No, a lot of it comes down to funding. And the forums have already lit up with Kickstarter-like schemes to get these development teams the funding they need.

This story is still very much in development, but you can follow it as it unfolds in the cited forum here:

Topic: EM Drive Developments – related to space flight applications – Thread 2

And the newly opened forum thread discussing this article is here:

Topic: FEATURE ARTICLE: Evaluating NASA’s Futuristic EM Drive

P.S. Tell me this doesn’t set your brain on fire:

The ultimate goal is to find out whether it is possible for a spacecraft traveling at conventional speeds to achieve effective superluminal speed by contracting space in front of it and expanding space behind it.  The experimental results so far had been inconclusive.

Mastering a backward-steering bike challenges neuroplasticity (video)

Smarter Every Day’s Destin Sandlin was presented with a backward-steering bike as a kind of joke. But his utter inability to control it sent him on an 8-month obsessive quest to remap his brain. And, as the clickbait headlines say: you’ll never believe what happened next. But seriously, it’s fascinating to see that exact moment when it all clicks for him, like seeing a wild animal tamed.

Realistic artificial gravity in science fiction (video)

PBS Space Time host Gabe Perez-Giz examines several beloved sci-fi ships (and other constructions) to find out which might provide the most realistic feeling of gravity.

2001: A Space Odyssey introduced a lot of people to the idea of rotation-based artificial gravity, but in sci-fi, it’s far from the only one to implement the idea! Babylon 5, Halo, and Ringworld also used rotation-based artificial gravity in their stories, but, being an astrophysicist, I had to ask, WHO DOES IT BEST? And more importantly, is artificial gravity in space possible?
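The physics in question is just the centripetal relation a = ω²r, which is also why the enormous fictional structures tend to come off best: the bigger the radius, the gentler the spin needed for one g. A quick sketch with made-up radii (not figures from the episode):

```python
# Spin rate needed for 1 g of rotation-based artificial gravity at a given
# radius (illustrative radii only).
from math import sqrt, pi

G = 9.81  # target acceleration, m/s^2

def rpm_for_one_g(radius_m: float) -> float:
    omega = sqrt(G / radius_m)      # angular speed in rad/s, from a = omega^2 * r
    return omega * 60 / (2 * pi)    # convert to revolutions per minute

for radius in (5, 50, 500, 10_000):  # small centrifuge up to a huge habitat
    print(f"radius {radius:>6} m -> {rpm_for_one_g(radius):5.2f} rpm")
```

Small radii need fast spins, and fast spins mean strong Coriolis effects for anyone walking around inside, which is roughly why the giant rings read as the more plausible designs.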

Connecting places geographically causes mental maps to merge

UCL News:

Realising how places connect geographically causes local maps in the brain to join, forming one big map which helps with planning future journeys, finds a new UCL study.

Changes like this can occur when people vary their route to work during a tube strike, for example. Commuters may be familiar with the location of two underground stations but only realise how one is linked to the other by walking between them. Knowing how the stations are connected can then be used to decide which route to take next time.

It’s an interesting feeling when two seemingly non-contiguous regions — hitherto mapped only during separate routines — turn out to connect. Something in your mind clicks, and you can actually feel the connection being made as the fog of war lifts to reveal a little bit more of the map.

Tinnitus mapped inside human brain (also: eeeeeeeeeeeee…)

Jonathan Webb, for BBC News:

In many cases it begins with partial hearing loss, sometimes due to loud noise wearing out the hair cells that convert sound waves into neural signals, inside the inner ear. The brain adjusts to that loss of input by boosting certain types of activity, creating the impression of a noise that nobody else can hear.

And:

Some earlier work has also suggested that a widespread network is involved in tinnitus, including brain areas outside those “auditory” sections that we know are involved in hearing. But this is the first time the abnormal activity of that network has been plotted in such detail.

I have tinnitus. Sometimes it’s quiet, and sometimes it’s distractingly loud. But the one thing that always makes it worse: thinking about it. Whatever advances research can make against this affliction, I welcome them.

“Artistic activity” helps to stave off cognitive decline in old age

Tom Jacobs, for Pacific Standard:

The behavior that had the greatest protective effect, at least in this relatively small study, was “artistic activity,” such as painting, drawing, and sculpting.

“Long ago, ‘an apple a day keeps the doctor away’ was a common expression,” Dr. James Galvin writes in a comment accompanying the study, which is published in the journal Neurology. “Perhaps today, the expression should expand to include painting an apple, going to the store with a friend to buy an apple, and using an Apple product.”

And:

Learning how to use a computer late in life had a highly positive impact—actually greater than for those who picked up the habit during their middle years. Perhaps seniors who discovered the joys of surfing the Web provided their brain with a new form of helpful stimulation.

So it’s not just about staying engaged and creative, but also about challenging the mind in novel ways. Noted.