Singing can help people to bond

Phys.org:

“The difference between the singers and the non-singers appeared right at the start of the study. In the first month, people in the singing classes became much closer to each other over the course of a single class than those in the other classes did. Singing broke the ice better than the other activities, getting the group together faster by giving a boost to how close classmates felt towards each other right at the start of the course.

“In the longer term, it appears that all group activities bring people together similar amounts. In non-singing classes ties strengthened as people talked to each other either during lessons or during breaks. But this is the first clear evidence that singing is a powerful means of bonding a whole group simultaneously.”

Here’s something that occurs to me: during casual listening, I’ve never paid attention to lyrics. Even in songs with English lyrics, I treat the singing as a kind of instrument, and I might not be able to tell you what the words are. Don’t know why. What I do know is that I prefer foreign vocals to English ones, because words that I can understand create a kind of noise that interferes with the sound shapes.

Not sure if that’s more related to synesthesia or misanthropy.

Researchers find memories stored in individual neurons

Sebastian Anthony, for ExtremeTech:

We know that a cluster of neurons firing can trigger the memory of your first kiss — but why? How can 100 (or 100,000) neurons, firing in a specific order, conjure up a beautifully detailed image of an elephant? We’ve already worked out how images are encoded by the optic nerve, so hopefully MIT isn’t too far away from finding out.

I think memory neurons are so compact because they’re just shorthand recipes that call upon our resident media library.

Raising families in space

Richard Hollingham, for BBC:

The first generation of colonists born in space will have parents with a strong connection to Earth. It is more intriguing to examine how the colonists’ grandchildren and their grandchildren’s children will adapt to life in the new environment. Space, not Earth, will be their home.

The fastest theoretical journey to the nearest star outside our Solar System, travelling at close to the speed of light, is more than 500 years. Think how humans on Earth have changed in that time.

I’ve been thinking about these questions as I conduct research for a book about interstellar life. To me there’s no question that these things will happen, and we’ll certainly have to come to terms with the moral questions as they’re raised. But what’s far more interesting to me is how we get to our destination(s). Generation ship? Sleeper ship? Seed ship? Each has its benefits.

Also, there’s the question of genetic diversity: how many people would it take to populate a new planet? As it turns out, it’s a pretty big number: 40,000. That means the building of that generation ship will be quite an undertaking.

Not to give anything away, but I’m betting on a sleeper ship/seed ship hybrid, with the assumption that gene synthesis will help round out the population once the relatively small colony is established.

The Rise of Computer-Aided Explanation

Michael Nielsen, for Quanta Magazine:

Using this statistical model, the computer could take a new French sentence — one it had never seen before — and figure out the most likely corresponding English sentence. And that would be the program’s translation.

When I first heard about this approach, it sounded ludicrous. This statistical model throws away nearly everything we know about language. There’s no concept of subjects, predicates or objects, none of what we usually think of as the structure of language. And the models don’t try to figure out anything about the meaning (whatever that is) of the sentence either.

Despite all this, the IBM team found this approach worked much better than systems based on sophisticated linguistic concepts. Indeed, their system was so successful that the best modern systems for language translation — systems like Google Translate — are based on similar ideas.

The difference between knowing how to model something and understanding why it works is something to ponder. Is knowing how less valuable than understanding why? In most applications, probably not. Either way, you can complete the task at hand, and a statistical model may even extrapolate beyond the bounds of the original question; it may even be an aid to understanding.
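As a toy illustration of that statistical idea, here’s a word-level sketch in the spirit of IBM Model 1, trained with expectation-maximization on a made-up three-sentence corpus. The corpus, the iteration count, and the word-by-word output are all drastic simplifications of what the article describes; real systems train on millions of sentence pairs and model word order too.

```python
from collections import defaultdict

# Made-up parallel corpus; real systems train on millions of sentence pairs.
corpus = [
    ("le chat noir".split(), "the black cat".split()),
    ("le chien noir".split(), "the black dog".split()),
    ("le chat blanc".split(), "the white cat".split()),
]

fr_vocab = {f for fr, _ in corpus for f in fr}
en_vocab = {e for _, en in corpus for e in en}

# t[f][e] approximates p(f | e); start with no opinion at all (uniform).
t = {f: {e: 1.0 / len(en_vocab) for e in en_vocab} for f in fr_vocab}

for _ in range(20):  # EM: re-estimate alignments, then re-estimate t
    count = defaultdict(float)
    total = defaultdict(float)
    for fr, en in corpus:
        for f in fr:
            z = sum(t[f][e] for e in en)  # normalize over possible alignments
            for e in en:
                p = t[f][e] / z
                count[(f, e)] += p
                total[e] += p
    for f in fr_vocab:
        for e in en_vocab:
            t[f][e] = count[(f, e)] / total[e]

def translate_word(f):
    # The English word most strongly aligned with the French word wins.
    return max(t[f], key=t[f].get)

print(translate_word("chat"))  # "cat", learned purely from co-occurrence
```

Even this miniature version never sees a subject or a predicate; it figures out that “chat” pairs with “cat” purely from which sentences the words show up in together.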

Curbed interviews Syd Mead

Patrick Sisson, for Curbed:

What were your influences for Blade Runner?
For a city in 2019, which isn’t that far from now, I used the model of Western cities like New York or Chicago that were laid out after the invention of mass transit and automobiles, with grids and linear transport. I thought, we’re at 2,500 feet now, let’s boost it to 3,000 feet, and then pretend the city has an upper city and lower city. The street level becomes the basement, and decent people just don’t want to go there. In my mind, all the tall buildings have a sky lobby, and nobody goes below the 30th floor, and that’s the way life would be organized.

Mead is a legend whose work has inspired me since I was a kid.

Digital forensics can identify you by the way you type

Dan Goodin, for Ars Technica:

The profiling works by measuring the minute differences in the way each person presses keys on computer keyboards. Since the pauses between keystrokes and the precise length of time each key is pressed are unique for each person, the profiles act as a sort of digital fingerprint that can betray its owner’s identity.

When you think about it, everything you are is a print, everything you do is a signature.
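The two measurements the article describes, how long each key is held and the pause between keys, are simple to extract. A minimal sketch, with hypothetical timestamps and a deliberately crude feature set:

```python
import statistics

# Hypothetical keystroke log: (key, press_ms, release_ms).
events = [
    ("h", 0, 95),
    ("e", 130, 210),
    ("l", 250, 340),
    ("l", 370, 455),
    ("o", 500, 590),
]

# Dwell time: how long each key is held down.
dwells = [up - down for _, down, up in events]

# Flight time: gap between releasing one key and pressing the next.
flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

# A crude behavioral fingerprint: the mean and spread of both measures.
# Real profilers use per-key-pair timings and far richer statistics.
profile = {
    "dwell_mean": statistics.mean(dwells),
    "dwell_stdev": statistics.stdev(dwells),
    "flight_mean": statistics.mean(flights),
    "flight_stdev": statistics.stdev(flights),
}
```

Comparing profiles like this across sessions is what lets two bursts of typing be attributed to the same pair of hands.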

How the easy editing of DNA changes the world

Amy Maxmen, for Wired:

Crispr goes well beyond anything the Asilomar conference discussed. It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.

It’s an epic tale of a technology with broad implications. Sadly, the petty struggle over patents casts a shadow over the entire thing. The best and worst of humanity on display.

Rust: we’ll give you your race and sex, just like life

Christian Nutt, for Gamasutra:

The catch? Gender will be randomly assigned, which mirrors Facepunch’s policy on race. If gender functions the same as race in Rust, it will also be tied to the player’s SteamID and unchangeable.

“I would love nothing more than if playing a black guy in a game made a white guy appreciate what it was like to be a persecuted minority,” Newman said at the time the race feature was introduced.

Already an interesting social experiment, the game Rust is about to get more interesting.

Living with mirror-touch synesthesia

Erika Hayasaki, for Pacific Standard:

Salinas is peculiarly attuned to the sensations of others. If he sees someone slapped across the cheek, Salinas feels a hint of the slap against his own cheek. A pinch on a stranger’s right arm might become a tickle on his own. “If a person is touched, I feel it, and then I recognize that it’s touch,” Salinas says.

The condition is called mirror-touch synesthesia, and it has aroused significant interest among neuroscientists in recent years because it appears to be an extreme form of a basic human trait. In all of us, mirror neurons in the premotor cortex and other areas of the brain activate when we watch someone else’s behaviors and actions. Our brains map the regions of the body where we see someone else caressed, jabbed, or whacked, and they mimic just a shade of that feeling on the same spots on our own bodies. For mirror-touch synesthetes like Salinas, that mental simulacrum is so strong that it crosses a threshold into near-tactile sensation, sometimes indistinguishable from one’s own. Neuroscientists regard the condition as a state of “heightened empathic ability.”

It can’t be easy living with an extra channel of sensory input. On some level, it’s just another sense. But our senses are meant to tell us about the world we’re experiencing, and this condition is more akin to interference from a neighboring radio station. It’s noise, even if noise itself can occasionally be put to use.

I have synesthesia, but the closest it gets to mirror-touch is that it’s nigh impossible for me to watch people dance.

Living with face blindness

Alexa Tsoulis-Reay, from Science of Us, talks with someone with profound prosopagnosia:

Say I showed you a bowl of fruit for 20 seconds. You would remember it as a bowl of fruit. If I let some time pass and asked you to tell me where the apple, pears, and bananas were positioned, you probably wouldn’t be able to. You would have to stare at that bowl of fruit, and commit it to memory, and you would have to know that you had to commit it to memory when you were looking at it.

To tell people apart I have to find a distinguishing feature. And context is huge. If I’m expecting to see somebody, I’ll figure out who they are by observing their body language, listening to their voice. Good-looking people are the most difficult to recognize.

This person’s description really paints a picture. When a key component of getting by in society goes missing, one has to rely on brute-force methods, like memory and visual association, just to build an approximation of recognition.

Ear bud tech promises tuning the sounds of the world around you

Nathan McAlone, for Business Insider Australia:

Doppler Labs isn’t interested in blocking out all natural noise and pumping pre-recorded sound into your ears. The team wants to change the sounds that are coming in. They want you to customise your sonic world in exactly the way you want.

Imagine being able to turn up the bass at a concert, or reduce the sound of a baby crying on a plane. That’s Kraft’s vision, and Doppler’s new “Here” active listening system is how he says he’ll prove it’s possible.

The technology is still in early development, but I think I’d prefer something like this to music. I don’t like listening to music in public, maybe because of the synesthesia — it makes me feel confined and distracted at the same time. But tuning ambient sound may be just the thing.

The people who need very little sleep

Helen Thomson, for BBC:

What would you do if you had 60 days of extra free time a year? Ask Abby Ross, a retired psychologist from Miami, Florida, a “short-sleeper”. She needs only four hours sleep a night, so has a lot of spare time to fill while the rest of the world is in the land of nod.

“It’s wonderful to have so many hours in my day – I feel like I can live two lives,” she says.

Short-sleepers like Ross never feel lethargic, nor do they ever sleep in. They wake early – normally around four or five o’clock – raring to get on with their day. Margaret Thatcher may have been one – she famously said she needed just four hours a night, whereas Mariah Carey claims she needs 15.

I’ve always been quick to sleep and quick to wake. I don’t need much more than five hours of sleep. The only difference is that I’m a night owl — without the need to match the schedule of others, I’d quickly revert to my natural 4 AM to 9 AM schedule.

Who was M.C. Escher?

Alastair Sooke, for BBC:

Yet, if we’re honest, how much do most of us really know about its creator, the Dutch printmaker MC Escher (1898-1972)? The truth is that outside his homeland Escher remains something of an enigma. Moreover, despite the popularity of his fastidious optical illusions, Escher continues to suffer from snobbery within the realm of fine art, where his output is often denigrated as little more than technically accomplished graphic design.

In Britain, for instance, it appears that only a single work by Escher belongs to a public collection: the woodcut Day and Night, which presents two flocks of birds, one black and one white, flying above a flat Dutch landscape in between a pair of rivers. Day and Night was Escher’s most popular print: during the course of his lifetime, he made more than 650 copies of it, painstakingly rendering each impression with the help of a small egg spoon made of bone.

I grew up admiring the works of the great sci-fi artists: Barlowe, Burns, Foss, Giger, McQuarrie, Mead, Whelan. But there was also Escher, a stranger who came from another place. I had several giant books of his prints that I would just stare at for hours.

The role of texting (and talking) in the future of UI

Kyle Vanhemert, for Wired:

Last December, in a blog post on Chinese app trends, designer and engineer Dan Grover announced the emergence of “chat as a universal UI.” Grover had moved from San Francisco to Guangzhou to work as a product manager for popular messaging app WeChat and noted the advent of “official accounts” for brands and public figures on the service. “Think SmarterChild but for banks, phone companies, blogs, hospitals, malls, and government agencies,” he explained, likening the accounts to the friendly AIM bot of yore. Today’s WeChat users ask their bank about their balance much like you and I once pestered SmarterChild for movie times.

WeChat official accounts don’t merely let users “connect” with a company or service in the same sense that Twitter lets users “connect” with Velveeta. The accounts provide utility that the rest of the smartphone-using world tends to compartmentalize into apps. As Benedict Evans, mobile guru at a16z, has noted, a WeChat user can “send money, order a cab, book a restaurant or track and manage an ecommerce order, all within one social app.”

Just as mechanical pushbuttons were an abstraction between users and the engines that ran things behind the scenes, user interfaces are an abstraction of interaction. Sometimes you need to see a lot of complex actions at the same time, presented in a way that’s easy to consume. But the rest of the time maybe all you need to do, instead of logging into something and looking for something to type in or tap on, is to ask a question: hey, remind me, when did I buy that hovercraft, and how much was it? A sufficiently advanced interface moves toward being entirely transparent, until it’s nothing more than an extension to conversation. The genie in the bottle comes to mind.
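That hovercraft question is answerable by something as humble as a keyword-based intent router, the mechanism behind SmarterChild-style bots. A minimal sketch; every intent, pattern, and purchase record here is hypothetical:

```python
import re

# Hypothetical purchase history the bot can consult.
PURCHASES = {"hovercraft": ("2014-03-02", 1999.00)}

def handle(message):
    text = message.lower()
    # Intent: "when did I buy X?"
    m = re.search(r"when did i buy (?:that |the |a )?(\w+)", text)
    if m and m.group(1) in PURCHASES:
        date, price = PURCHASES[m.group(1)]
        return f"You bought the {m.group(1)} on {date} for ${price:.2f}."
    # Intent: account balance (stand-in for a real account lookup).
    if "balance" in text:
        return "Your balance is $42.00."
    return "Sorry, I didn't understand that."

print(handle("Hey, remind me, when did I buy that hovercraft?"))
```

The modern systems the article describes swap the regular expressions for statistical language understanding, but the shape of the interaction is the same: a question in, an answer out, no buttons in between.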

We don’t look the way we think we look

Medical Xpress, from the British Journal of Psychology:

Results of the study show that the unfamiliar participants chose a different set of ‘good likeness’ images compared to those that people had selected of themselves. Surprisingly, the images selected by strangers also led to better performance on the online face matching test. The size of the advantage in other-selection over self-selection was quite large—self-selected images were matched seven per cent less accurately compared to other-selected images.

White said: “It seems counter-intuitive that strangers who saw the photo of someone’s face for less than a minute were more reliable at judging likeness. However, although we live with our own face day-to-day, it appears that knowledge of one’s own appearance comes at a cost. Existing memory representations interfere with our ability to choose images that are good representations or faithfully depict our current appearance.”

Interesting, especially because this isn’t depersonalization or prosopagnosia, but a run-of-the-mill inability (or aversion) to perceive what we see when we see ourselves. But I can understand it — when I look in the mirror, all I see is a smudge of pixels.

“Passive frame theory” paints consciousness as reflexive interpreter

San Francisco State University:

Because the human mind experiences its own consciousness as sifting through urges, thoughts, feelings and physical actions, people understand their consciousness to be in control of these myriad impulses. But in reality, Morsella argues, consciousness does the same simple task over and over, giving the impression that it is doing more than it actually is.

“We have long thought consciousness solved problems and had many moving parts, but it’s much more basic and static,” Morsella said. “This theory is very counterintuitive. It goes against our everyday way of thinking.”

That’s for sure. Everything about this seems so wrong… which is what I like about it. Consciousness seems to be the thing in charge, because it’s the window we see through. Or so it tells us. But maybe we give too much credit to the messenger.

Experiment confirms future measurement at quantum level affects the past

Australian National University:

Physicists at The Australian National University (ANU) have conducted John Wheeler’s delayed-choice thought experiment, which involves a moving object that is given the choice to act like a particle or a wave. Wheeler’s experiment then asks – at which point does the object decide?

Common sense says the object is either wave-like or particle-like, independent of how we measure it. But quantum physics predicts that whether you observe wave like behavior (interference) or particle behavior (no interference) depends only on how it is actually measured at the end of its journey. This is exactly what the ANU team found.

“It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it,” said Associate Professor Andrew Truscott from the ANU Research School of Physics and Engineering.

Showing us, once more, that things only get confusing when we observe them.

LEGO Kit Instructions vs. Creativity

Garth Sundem, for GeekDad:

Basically, is the shift toward kit- rather than free-building creating a generation of sheep-brained automatons, suited only to Laverne-and-Shirley-like assembly line work rather than to the creation of new and novel ideas? (Not to be, you know, alarmist or anything…)

This was the question Page Moreau of the Wisconsin School of Business and Marit Gundersen Engeset of Buskerud and Vestfold University in Kongsberg, Norway, brought to the lab (with the paper just accepted at the Journal of Marketing Research): does following LEGO instructions make you less creative? They frame the question in terms of mindsets–“Recent research on mindsets has demonstrated that an individual’s behavior or thought processes in one situation can influence their thoughts and behaviors in later, unrelated tasks,” they write.

Then:

Now let’s get to the science. In a first experiment, Moreau and Engeset stuck 136 undergrads in the lab, had them free-build or kit-build with LEGOs, and then measured their performance on well-defined and ill-defined problems. Specifically, after building they had them solve 25 analogies (well-defined, with set rules and each with only one correct answer) or draw and title small doodles, starting with a couple squiggles to spark the imagination (ill-defined, with almost no rules and infinite “answers”).

Here’s how they describe the results: “Participants tackling the well-defined problem [kit building] received a lower creativity score than those solving the ill-defined problem [free building] … participants in the well-defined condition scored lower on both originality and abstractness than their counterparts in the ill-defined condition.”

This kind of thing fascinates me. I was always more interested in so-called “free building” than in “kit building.” Any instruction manuals were always pushed aside — I didn’t want to build someone else’s model to spec. I preferred to explore.

“Brain to Text” system translates brain activity into complete sentences

Neuroscience News:

Speech is produced in the human cerebral cortex. Brain waves associated with speech processes can be directly recorded with electrodes located on the surface of the cortex. It has now been shown for the first time that it is possible to reconstruct basic units, words, and complete sentences of continuous speech from these brain waves and to generate the corresponding text. Researchers at KIT and Wadsworth Center, USA present their “Brain-to-Text” system in the scientific journal Frontiers in Neuroscience.

“It has long been speculated whether humans may communicate with machines via brain activity alone,” says Tanja Schultz, who conducted the present study with her team at the Cognitive Systems Lab of KIT. “As a major step in this direction, our recent results indicate that both single units in terms of speech sounds as well as continuously spoken sentences can be recognized from brain activity.”

This “Brain-to-Text” process is very cool, there’s no denying it. But I hope this moves us closer to the time when we can record our dreams for playback later — “Brain-to-Video” is what I’m waiting for.

Voices in our heads shaped by our culture

Clifton B. Parker, for Stanford Report:

The striking difference was that while many of the African and Indian subjects registered predominantly positive experiences with their voices, not one American did. Rather, the U.S. subjects were more likely to report experiences as violent and hateful – and evidence of a sick condition.

The Americans experienced voices as bombardment and as symptoms of a brain disease caused by genes or trauma.

Then:

Why the difference? Luhrmann offered an explanation: Europeans and Americans tend to see themselves as individuals motivated by a sense of self identity, whereas outside the West, people imagine the mind and self interwoven with others and defined through relationships.

“Actual people do not always follow social norms,” the scholars noted. “Nonetheless, the more independent emphasis of what we typically call the ‘West’ and the more interdependent emphasis of other societies has been demonstrated ethnographically and experimentally in many places.”

As a result, hearing voices in a specific context may differ significantly for the person involved, they wrote. In America, the voices were an intrusion and a threat to one’s private world – the voices could not be controlled.

However, in India and Africa, the subjects were not as troubled by the voices – they seemed on one level to make sense in a more relational world.

I’ve long wondered why this seems to be the case, though not from my own personal experience. At least it’s an interesting mirror with which we might look at the shape of our own culture.

Unused Blade Runner footage edited together for alternate take on original story

Wow, what a gold mine. This “B-Roll Cut” of Blade Runner is a 45-minute alternate take of the story, mostly made up of unused (and narration-heavy) footage. It’s a little incoherent at times, as you might expect, but still totally fascinating. Love it.

It really takes me back to see sci-fi shot in this way, with Vangelis’ score. I have to wonder whether the upcoming sequel can approach the moody tone set by the original masterpiece.

Elon Musk’s satellite swarm to provide global internet service?

Cecilia Kang and Christian Davenport, for The Washington Post:

Elon Musk’s space company has asked the federal government for permission to begin testing on an ambitious project to beam Internet service from space, a significant step forward for an initiative that could create another major competitor to Comcast, AT&T and other telecom companies.

The plan calls for launching a constellation of 4,000 small and cheap satellites that would beam high-speed Internet signals to all parts of the globe, including its most remote regions. Musk has said the effort “would be like rebuilding the Internet in space.”

If successful, the attempt could transform SpaceX, based in Hawthorne, Calif., from a pure rocket company into a massive high-speed-Internet provider that would take on major companies in the developed world but also make first-time customers out of the billions of people who are currently not online.

Ambitious, to be sure, and I’m all for it. Musk is on a roll, his heart is in the right place, and he has the resources to pull something like this off. Still, I suspect this is only the beta program for Musk’s even more ambitious plan to deliver fast internet access to Mars.

Why Music Makes Our Brain Sing

Robert J. Zatorre and Valorie N. Salimpoor, for the New York Times:

More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain.

When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine.

But what may be most interesting here is when this neurotransmitter is released: not only when the music rises to a peak emotional moment, but also several seconds before, during what we might call the anticipation phase.

I know people for whom music is a background thing, which is fine. I can even admire that. But for me, a good piece of music stops me in my tracks. I can’t multitask to it. All I can see is the shapes and colors.

I remember the first time I heard Vaughan Williams’ Fantasia on a Theme by Thomas Tallis. I was in the car, and had to pull over to the side of the road. The goosebumps it gave me were painful. I really do think the piece is that shatteringly good — so much so that I have to avoid listening to it.

Selective amnesia: routinely forgetting familiar things

Alison Beard interviews UCLA professor Alan Castel for Harvard Business Review:

It would be overwhelming and maladaptive to mentally record everything we see. So subconsciously we let some things fall away. The most famous experiment on this topic showed that few people can correctly recall the placement of the features on a penny—which way Lincoln is facing or where the word “liberty” goes. It’s a familiar object, yet we don’t focus on its details.

Other work has shown that the same is true for calculator keypads, computer keyboards, elevator buttons, and aspects of road signs. My colleagues Adam Blake and Meenely Nazarian and I thought that we might get different findings with the Apple logo. It’s also extremely familiar—nowadays maybe even more so than a penny—but it’s simpler. It’s designed to be aesthetically pleasing, and for many it’s a symbol of high value. But perhaps because it’s so ubiquitous and basic, our study subjects clearly hadn’t committed its details to memory. Only one got every part of the logo right, and just seven could draw it with three or fewer errors. And when we put the actual Apple logo in a lineup with seven altered versions, only 47% of people could identify it. We all know it looks like the fruit, but most of us don’t pay attention to the bite or the leaf. And that’s natural. We don’t burden ourselves with information we don’t think we’ll need to use.

I can draw the Apple logo as if I were tracing it. Then again, I can remember how many bars are in the IBM logo. Apparently there’s some part of my mind that believes the information will be important one day.

Our personalities determine how easy it is to hold eye contact

Suomen Akatemia, at the Academy of Finland:

Previous research has suggested that eye contact triggers patterns of brain activity associated with approach motivation, whereas seeing another person with his or her gaze averted triggers brain activity associated with avoidance motivation. This indicates that another person’s attention is something important and desirable. However, many people find it discomforting and may even experience high levels of anxiety when they are the focus of someone’s gaze.

I’ve always found it excruciating to hold someone’s gaze. I’ve had a lot of practice over time, but it’s still something I’d rather avoid, given the choice.