Sunday, March 27, 2016

Everybody Loves Dopamine



Dopamine is love. Dopamine is reward. Dopamine is addiction.

Neuroscientists have a love/hate relationship with how this monoamine neurotransmitter is portrayed in the popular press.





[The claim of vagus nerve-stimulating headphones is worth a post in its own right.]



“You can fold your laundry, but you can’t fold your dopamine.”
- James Cole Abrams, M.A. (in Contemplative Psychotherapy)


The word dopamine has become a shorthand for positive reinforcement, whether it's from fantasy baseball or a TV show.

But did you know that a subset of dopamine (DA) neurons originating in the ventral tegmental area (VTA) of the midbrain respond to noxious stimuli (like footshocks) and regulate aversive learning?

Sometimes the press coverage of a snappy dopamine paper can be positive and (mostly) accurate, as was the case with a recent paper on risk aversion in rats (Zalocusky et al., 2016). This study showed that rats that like to “gamble” on getting a larger sucrose reward have a weaker neural response after “losing.” Here, losing means choosing the risky lever (which dispenses a small amount of sucrose 75% of the time and a large amount 25% of the time) and getting the small reward. The gambling rats will continue to choose the risky lever after losing, while other, risk-averse rats will switch to the “safe” lever, which delivers a constant reward.
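
For intuition, here is a minimal sketch of the lever contingencies described above. The sucrose amounts are placeholders I chose so the two options have matched expected value; they are not the volumes Zalocusky et al. actually used.

import random

def risky_lever():
    # 25% of presses pay a large reward, 75% a small one
    # (9 and 1 are assumed placeholder amounts, chosen so the expected
    # value matches the safe lever)
    return 9 if random.random() < 0.25 else 1

def safe_lever():
    # constant, intermediate reward (assumed placeholder amount)
    return 3

random.seed(0)
n = 100_000
risky_mean = sum(risky_lever() for _ in range(n)) / n
print(f"risky lever mean payout: {risky_mean:.2f}")   # ~3.0
print(f"safe lever payout:       {safe_lever()}")     # 3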

This paper was a technical tour de force with 14 multi-panel figures.1 For starters, cells in the nucleus accumbens (a VTA target) expressing the D2 receptor (NAc D2R+ cells) were modified to express a calcium indicator that allowed the imaging of neural activity (via fiber photometry). Activity in NAc D2R+ cells was greater after loss, and during the decision phase of post-loss trials. And these two types of signals were dissociable.2 Then optogenetic methods were used to activate NAc D2R+ cells on post-loss trials in the risky rats. This manipulation caused them to choose the safer option.
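
As a rough illustration of the kind of readout fiber photometry provides (a generic ΔF/F calculation, not the authors' actual analysis pipeline), the bulk fluorescence from the calcium indicator is typically expressed relative to a baseline:

import numpy as np

def delta_f_over_f(trace, baseline_samples=200):
    # Estimate baseline fluorescence F0 from an initial window (a simplification;
    # real pipelines often use a running or low-percentile baseline)
    f0 = trace[:baseline_samples].mean()
    return (trace - f0) / f0

rng = np.random.default_rng(42)
trace = 100 + rng.normal(0, 1, size=1000)   # fake raw fluorescence
trace[600:640] += 12                        # simulated calcium transient
dff = delta_f_over_f(trace)
print(f"peak dF/F: {dff.max():.3f}")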



Noted science writer Ed Yong wrote an excellent piece about these findings in The Atlantic (Scientists Can Now Watch the Brain Evaluate Risk).

Now, there's a boatload of data on the role of dopamine in reinforcement learning and in computational models of reward prediction error (RPE; Schultz et al., 1997), along with ongoing discussion about potential weaknesses in the DA/RPE model. So while Zalocusky et al. (2016) is a very impressive addition to the growing pantheon of laser-controlled rodents, its results aren't massively surprising.
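
The core of that account fits in a few lines. This is the textbook delta-rule formulation, not any particular lab's model:

def update_value(value, reward, alpha=0.1):
    # Reward prediction error: how much better (or worse) the outcome was than expected
    rpe = reward - value
    # The expectation moves a fraction alpha (the learning rate) toward the outcome
    return value + alpha * rpe, rpe

value = 0.0
for reward in [1, 1, 1, 1, 0]:   # toy sequence ending with an omitted reward
    value, rpe = update_value(value, reward)
    print(f"reward={reward}  RPE={rpe:+.2f}  expectation={value:.2f}")

On this account, phasic DA activity tracks the RPE term: it bursts when outcomes beat expectations, stays near baseline for fully predicted rewards, and dips below baseline when an expected reward is omitted.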


More surprising are two recent papers in the highly sought-after population of humans implanted with electrodes for seizure monitoring or treatment of Parkinson's disease. I'll leave you with quotes from these papers as food for thought.

1. Stenner et al. (2015). No unified reward prediction error in local field potentials from the human nucleus accumbens: evidence from epilepsy patients.
Signals after outcome onset were correlated with RPE regressors in all subjects. However, further analysis revealed that these signals were better explained as outcome valence rather than RPE signals, with gamble gains and losses differing in the power of beta oscillations and in evoked response amplitudes. Taken together, our results do not support the idea that postsynaptic potentials in the Nacc represent a RPE that unifies outcome magnitude and prior value expectation.
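
To make the distinction in that quote concrete, here is a toy contrast between the two kinds of regressor being compared (the trial values are invented for illustration only):

# Each trial: (prior expectation for the chosen gamble, actual outcome)
trials = [(0.25, 1.0),   # unexpected gain  -> large positive RPE
          (0.75, 1.0),   # expected gain    -> small positive RPE
          (0.75, 0.0),   # unexpected loss  -> large negative RPE
          (0.25, 0.0)]   # expected loss    -> small negative RPE

rpe_regressor     = [outcome - expected for expected, outcome in trials]
valence_regressor = [+1 if outcome > 0 else -1 for _, outcome in trials]

print("RPE regressor:    ", rpe_regressor)      # graded by prior expectation
print("valence regressor:", valence_regressor)  # gain vs. loss only

A genuine RPE signal should differ between the first two trials (and between the last two); a signal reflecting only outcome valence should not. Stenner et al. report that the accumbens field potentials behaved more like the latter.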

The next one is extremely impressive for combining deep brain stimulation with fast-scan cyclic voltammetry, a method that tracks dopamine fluctuations in the human brain!

2. Kishida et al. (2016). Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward. 
Dopamine fluctuations in the striatum fail to encode RPEs, as anticipated by a large body of work in model organisms. Instead, subsecond dopamine fluctuations encode an integration of RPEs with counterfactual prediction errors, the latter defined by how much better or worse the experienced outcome could have been. How dopamine fluctuations combine the actual and counterfactual is unknown. One possibility is that this process is the normal behavior of reward processing dopamine neurons, which previously had not been tested by experiments in animal models. Alternatively, this superposition of error terms may result from an additional yet-to-be-identified subclass of dopamine neurons.
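
Purely as an illustration of what a “superposed” error signal might look like (the subtraction and the weight below are my assumptions for the sketch, not Kishida et al.'s fitted model):

def reward_prediction_error(outcome, expected):
    return outcome - expected

def counterfactual_prediction_error(outcome, best_alternative):
    # How much better the experienced outcome could have been
    return best_alternative - outcome

def superposed_error(outcome, expected, best_alternative, w=1.0):
    # One possible integration of the two terms; the sign and weight w are illustrative
    return (reward_prediction_error(outcome, expected)
            - w * counterfactual_prediction_error(outcome, best_alternative))

# A win that could have been much larger is discounted relative to its raw RPE
print(superposed_error(outcome=10, expected=5, best_alternative=30))  # 5 - 20 = -15
print(superposed_error(outcome=10, expected=5, best_alternative=10))  # 5 - 0  =  5

Under this toy scheme the same objective gain registers differently depending on what else was on offer, which is the flavor of counterfactual sensitivity the quote describes.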


Further Reading

As Addictive As Cupcakes – Mind Hacks (“If I read the phrase ‘as addictive as cocaine’ one more time I’m going to hit the bottle.”)

Dopamine Neurons: Reward, Aversion, or Both? – Scicurious

Back to Basics 4: Dopamine! – Scicurious (in fact, anything by Scicurious on dopamine)

Why Dopamine Makes People More Impulsive – Sofia Deleniv at Knowing Neurons

2-Minute Neuroscience: Reward System – video by Neuroscientifically Challenged


Footnotes

1 For example:
Because decision-period activity predicted risk-preferences and increased before safe choices, we sought to enhance the D2R+ neural signal by optogenetically activating these cells during the decision period. An unanticipated obstacle (D2SP-driven expression of channelrhodopsin-2 eYFP fusion protein (D2SP-ChR2(H134R)-eYFP) leading to protein aggregates in rat NAc neurons) was overcome by adding an endoplasmic reticulum (ER) export motif and trafficking signal (ref. 29) (producing enhanced channelrhodopsin (eChR2); Methods), resulting in improved expression (Extended Data Fig. 7). In acute slice recordings, NAc cells expressing D2SP-eChR2(H134R)-eYFP tracked 20-Hz optical stimulation with action potentials (Fig. 4c).

2 The human Reproducibility Project: Psychology brigade might be interested to see Pearson’s r² = 0.86 in n = 6 rats.



References

Kishida KT, Saez I, Lohrenz T, Witcher MR, Laxton AW, Tatter SB, White JP, Ellis TL, Phillips PE, Montague PR. (2016). Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward. Proc Natl Acad Sci 113(1):200-5.

Schultz W, Dayan P, Montague PR. (1997). A neural substrate of prediction and reward. Science 275:1593–1599.

Stenner MP, Rutledge RB, Zaehle T, Schmitt FC, Kopitzki K, Kowski AB, Voges J, Heinze HJ, Dolan RJ. (2015). No unified reward prediction error in local field potentials from the human nucleus accumbens: evidence from epilepsy patients. J Neurophysiol. 114(2):781-92.

Zalocusky K, Ramakrishnan C, Lerner T, Davidson T, Knutson B, Deisseroth K. (2016). Nucleus accumbens D2R cells signal prior outcomes and control risky decision-making. Nature. doi:10.1038/nature17400


6 Comments:

At March 28, 2016 12:32 PM, Anonymous Anonymous said...

Regarding the 2016 Kishida paper, yes, it had a novel technological approach. But it didn't provide factual evidence for its Significance section statement:
"The observed compositional encoding of "actual" and "possible" is consistent with how one should "feel" and may be one example of how the human brain translates computations over experience to embodied states of subjective feeling."

The subjects weren't asked for corroborating evidence about their feelings. Evidence for "embodied states of subjective feeling" wasn't otherwise measured in studied brain areas. The primary argument for "embodied states of subjective feeling" was the second paragraph of the Discussion section where the researchers talked about their model and how they thought it incorporated what people should feel.

What the study's model did was to infer these "states of subjective feeling." And we know from the fallout over the "The dorsal anterior cingulate cortex is selective for pain" study that "It’s probably a bad idea to infer any particular process on the basis of observed activity."

 
At April 03, 2016 8:18 PM, Blogger Unknown said...

I find the outcome of the experiment with the gambling rats to be interesting. The rats that lost chose the risky lever, and the other rats chose the safe side. Who would predict that? Dopamine is what makes us happy, yet the rats that gamble for a higher sucrose reward and lose have a weaker neural response.

 
At April 12, 2016 2:56 PM, Blogger Unknown said...

I knew that dopamine was addictive just from the research I've done in the past, but I did not know about the experiments you spoke of with the rats. I find it very interesting that the rats liked to gamble and chose the risky lever most of the time. I also found it interesting that those that lost still had a weaker neural response. I'm going to read more on this topic since it caught my interest so much. Thank you for sharing this; it has really got my attention and I look forward to looking into it further.

 
At April 16, 2016 8:09 AM, Anonymous Anonymous said...

The whole "DA as reward" idea is problematic. The best work is done by Salamone. It's just frickin' complicated. http://today.uconn.edu/2012/11/uconn-researcher-dopamine-not-about-pleasure-anymore/

 
At April 29, 2016 6:21 PM, Blogger Unknown said...

I agree that dopamine is tied to addiction. I'm not so sure about dopamine as a reward, though. I did not know that rats would gamble to get a larger reward, or that they would have a weaker response after losing. Considering that there is a greater chance of losing, it would make sense for them to have a weaker response, knowing that they have a lower chance of getting the high amount.

 
At April 30, 2016 5:21 PM, Blogger Unknown said...

I found this article to be very interesting. It makes sense that dopamine plays a role in reinforcement. Addiction is very complex; people get addicted to that dopamine high. Gambling addiction runs in my family and I just don't get it. I work too hard for my money to throw it away. I enjoy playing the lottery or buying scratch-off tickets from time to time, and it is exciting and exhilarating to win, but I must have a strong neural response because I don't get enough out of the small chance of winning a larger reward. I have to wonder about this study, though; it doesn't really mention much about the method, only the results. It would be interesting to read the full study and see if there were any extraneous or confounding variables at play in the rats that chose the larger reward.

 
