I’ve been devouring The Remedy by Thomas Goetz since it came out last week and finished it on a series of long flights this week.
It’s a lucid, accessible popular science book, primarily about two men - Robert Koch and Arthur Conan Doyle - and their debate over whether a claimed tuberculosis cure was indeed a cure. If you’re even a little interested in the late 1800s, popular science, the origins of Sherlock Holmes, or the emergence of medicine as a science, you’ll probably enjoy it.
For me though, I was most struck by the first few chapters and the remarkably clear unfolding of how certain moments in scientific time can be moments of massive and rapid change. Goetz traces how Koch, a country doctor in Germany with big dreams, integrated theoretical, methodological, and technological breakthroughs to become one of the most famous scientists in the world.
This integration of breakthroughs fascinated me. The theoretical breakthrough was germ theory, and Koch didn’t invent it. It had spent decades burbling from the fringes of science towards the mainstream but was still well on the outside. It threatened the theories of the famous, eminence-based scientific system. And it hadn’t been demonstrably proven. But it was a powerful enough theory that despite the shutout, it continued to develop, quietly, on the edges.
The methodological breakthrough was Koch’s. He figured out how to use pure cultures, the basis of his four postulates, to determine the cause of infectious disease. In so doing he helped cement germ theory as a cornerstone of modern science and medicine.
The technological breakthroughs laid out in the book may be my favorite part. I knew about the emergence of germ theory, I knew vaguely of the four postulates, but I had no idea of the kind of rapid, on-the-fly invention that the emergence of the culture-based methodology spurred. It is straight out of Eric von Hippel.
The example that sticks with me from the book is such a simple one on the surface: the Petri dish full of agar. But it arrived only after Koch began culturing anthrax in the aqueous humor of a cow’s eye between glass slides, moved to gelatin on plates, then to agar on the advice of a jam-savvy scientist’s wife, and finally to round plates with upraised edges. This kind of evolution - technology supporting methodology supporting theory - is packed onto almost every page of the first few chapters and just blew me away.
I’m fairly convinced that this is a pattern we’re in the middle of right now. What struck me reading The Remedy was that I think we can identify the methods and the technologies - sequencing, causal statistical analysis, self-tracking, all the stuff that is on the bingo card of a “Big Data” conference attendee.
But I wonder what the theory is. Goetz clearly makes the point that scientific progress is only obvious in retrospect. In the moment, it’s messy, competitive, sometimes downright personally nasty (the Pasteur-Koch animosity is epic!). The methods can seem so much more obvious in the moment than the theory.
I do what I do in the belief, naive though it may be, that the breakthrough methods and technologies of the last 20 years are on the edge of allowing us to prove or disprove new theories about the causation of chronic disease, as Koch’s time did about infectious disease. And it seems obvious in retrospect that germs would emerge as the causative theory. But at the time, it wasn’t. Just like it wasn’t obvious that ulcers were caused by infection.
So what are the theories of chronic disease that are going to be embarrassingly obvious? What are even the candidates? I wonder.
(disclosure: This post is about Jane McGonigal. I’ve met Jane in person twice, and we follow each other on twitter. We have spent about 10 minutes total in each other’s company - we are friendly, though we don’t know each other well.)
Jane McGonigal, a well-known gamer and advocate for the good that games can bring to people’s health, put up a webpage recently. It’s titled “Play, don’t replay!” and it’s intended to broadcast the existence of a study that established a small, but statistically significant, connection between playing games like Tetris and easing post-traumatic stress disorder.
It’s a neat theory. I spent some time in treatment for traumatic stress disorder and looked into eye-movement desensitization and reprocessing (EMDR) as a therapeutic intervention, and there is some real evidence that EMDR works. It makes intuitive sense to me that games, especially ones that inspire a visual twitch like Tetris, could trigger some of the same effects.
Jane came under some withering criticism for putting up the page. Much of it is gaslighting and I won’t link to it. The criticism that interests me comes from Brendan Keogh, who lists himself as a PhD Candidate in Game Studies at RMIT University in Australia, and who called the page “shockingly unethical and irresponsible.”
Here’s the thing. What’s ethical or responsible depends on where you live, where you work, and what your goals are. What’s ethical is changing on us, in real time, thanks to social media. And charging that someone is shockingly unethical and irresponsible, as Brendan did, is serious stuff. It’s about the worst thing you can say in academia (perhaps only plagiarism is worse).
But here’s the thing. It’s not clear to me that the page constitutes research under U.S. law. I can’t see anything on the page that says the point of the page is “a systematic investigation … designed to develop or contribute to generalizable knowledge” - which is what our laws define as research. It’s not systematic. It’s not promising to publish results. So the law’s ambiguous to me here.
And it’s really important, this definition. Because the whole point of the criticism seems to be about research ethics (as opposed to, say, Aristotle’s Ethics). So whether or not this is research is really relevant to its ethics.
Besides governing law, institutions guard against their own liability, which means that game studies researchers would probably have to get institutional review for something like this even if it’s not research under the law. But Jane doesn’t work at a research institution, which means she’s not subject to institutional review. If this had been a Huffington Post piece promoting the article and asking people to leave their experiences in the comments, it wouldn’t be much different.
Now it’s entirely fair to argue that Jane should have taken some more time to think about whether or not she’s covered, whether this is human subjects research, whether she should get independent review. Whether her internet stature imposes an obligation. I think that would have been smart, and I’ll come back to that later in this post. But that’s the thing. It’s arguable.
And arguable is a long way from “shockingly unethical.”
Reading the piece, it feels like there was a pre-existing allergic reaction to the “games evangelism industry” that colored the reaction to the page in question. The first version of the piece even added an Upworthy twist to the page’s description of “one simple technique” by converting it to “one simple trick” (this may be an example of priming).
I have run into this allergic reaction for years in the “harder” sciences (biology especially). There is a real distaste for connecting directly to people via social media, a distaste that I believe has at least some origins in ethics training. I’d imagine Brendan has had ethics drummed into him by his university (likely the Australian version of research ethics, which does seem to have a broader definition of research than US law).
Research ethics require us to get informed consent, assess risks and benefits, and fairly select subjects - none of which is explicit in Play, don’t replay. And as someone who works nearly full time on informed consent, that nags at me. I’d like to see more of those elements drawn in, more of a sense of responsibility incorporated.
But I can’t get past the idea that this isn’t clearly research. It’s talking to people. And the internet has changed the way we talk to people. Talking to people over twitter reaches more people than a clinical trial if you’re Jane. When Amanda Palmer has a twitter chat about sexual violence, it reaches several orders of magnitude more people than a sexual violence research study.
That reach itself doesn’t make it a study.
I also can’t get past the idea that this isn’t clearly not-research either. There’s enough dancing near the creation of knowledge that, with the right eyes, one could say this is a page that should have been reviewed by an ethics committee.
I would love to have seen both parties do something different here.
I think Brendan’s accusation of shockingly unethical and irresponsible behavior ignores local context about what research is and where research ethics kick in. If you’re going to criticize someone’s ethics, you must first attempt to understand their context. I see no evidence of that in the criticism, and that bothers me.
I also think Jane’s page brushes close enough to research that she should have run it past someone (not me, someone who does social science and social media) to get an ethical review. I do not think it’s unethical, though.
The real reason I think she should have sought ethical review is her reach. I think that reach imposes an obligation, an obligation that has never existed the way it now exists.
There is a real possibility for abuse in this space by those who have social reach. Indeed, I think this possibility is part of the criticism leveled by Brendan, as he repeatedly notes that he believes in her good intentions. The shockingness here is not attributed to intention, which is an interesting point of intersection. Jane could be a leader in how to use social reach ethically. I would love to see her do it - there are not many candidates who could do it better than she could.
But in the general context…the line between “just talking to people” and “doing research” is dissolving.
We never even had to deal with that line before. It was there because only credentialed researchers could hit scale in talking to people: they could raise money, they had structures to recruit. Now Jane’s got the structures to recruit, and contact is costless. Now talking-to-people can brush right up against the edge of doing-research, with all the attendant ethical questions swept along, with none of the systems functioning and none of the people talking to each other about the real problem.
We need to have a serious conversation about what the dissolution of that line between research and conversation means. Research has much to teach conversation. But - and this is essential - conversations at scale have much to teach research. I would submit that conversations at scale are simultaneously the most powerful form of research that we have yet invented and a form of research that is totally outside our ethics, because it is so new.
This needs to be a two-way street if traditional, university-oriented research wants to survive. Because conversations at scale are going to eat it alive if the academy tries to pick the wrong fight.
The Synapse software we run at Sage Bionetworks is open source software.
That statement carries a certain set of expectations: that we provide you, the user, with some serious powers. You can download the code from GitHub under the Apache 2.0 License, you can access our bug tracker, and we’ve invested in developer documentation.
That’s what “open source” means. You can get our code. You can change our code. You can redistribute those changes. You can sell our software and never tell us or send us a check. There is an Open Source Definition that spells this all out clearly, arrived at via community consensus, and long used to adjudicate claims of open source-ness.
There’s a rub, though. The definition is based on the distribution terms of the software - the legal tools that wrap around the code and govern its use and reuse. And we comply with all those terms.
But Synapse has been built from the bottom up as a cloud-based service. Our users love that aspect. Our developers develop for it. And that creates a very interesting conundrum. We run a product that is clearly, legally, technically open source. But the definitions that govern our ability to make that claim about a piece of software don’t reach anywhere into the operations of that software - the way that we run it.
This has big long-term implications. It’s thus possible to market an organization as an open source software organization, yet to architect a radically closed service on top of that open code. It’s not what we do at Sage, or where we’re going, but it’s something we have learned, almost by accident, is entirely doable. The Free Software Foundation has written about some elements of the relationship between services and freedoms. But there’s no roadmap for an organization that wants to run an open service for free.
We need to watch out for Fauxpen Source here. Not everyone can stand up an Amazon instance and run a cloud service, even if they’ve got the license rights to do so. Not everyone has steady internet. Not everyone will pay attention to the details when they see an open source stamp. Not everyone will pay for open services.
And costs are a big part of the rub. Open source in the old technology model is inherently scalable at a fairly low cost other than the costs of development, which can be either paid or volunteer. The usage of the software doesn’t add much cost. If everyone is downloading and running code locally, then you don’t need a big budget for users and their usage. In a service context, our costs scale with our users - maybe linearly, but given that we store genomes and genome sequencing is cheaper every day, maybe exponentially.
Open as a service is where we’re going. But no one has a map yet. If anyone’s got ideas, please get in touch.
I’m sitting in the Vancouver airport winging home from TED 2014. It’s still going on, but I only had a day pass for yesterday as we gathered DNA samples for the soft launch of the Resilience Project (a collaboration between Sage Bionetworks and Mount Sinai).
I don’t often take DNA swabs, but when I do, I kneel on the floor and furiously barcode TED attendees’ DNA swabs. Pictured: Elissa Levin, Stephen Friend, Lesa Mitchell, Diana Friend. Not pictured: Linda Avey, who was on the floor with me.
Thanks, TED (and especially Priscilla, the ever-gracious speaker concierge) for letting us launch our project!
As we were decompressing, Charlie Rose started interviewing Larry Page. It was a very wide-ranging interview, but the part that stuck out to me was Page’s desire to have a massive pool of medical records available for research purposes.
This is obviously near and dear to my heart, and to many others. TechCrunch picked up on it and was gracious enough to quote me in their coverage.
But it was the phrasing that stuck like a splinter in my brain. At no point was this mass of records about the people referenced by those records. It was as if the data had no relation to the people other than as grist for a vast mill, something to be turned into insights that only then would help people. There was a desire for total disconnect between the medical records and the very real, very human beings whose cholesterol and hemorrhoids were described therein.
This is often done in the name of protecting the privacy of those people. But as we’ve seen, anonymization of records doesn’t mean the records are anonymous in the hands of skilled attackers.
I think the real reason is that it’s just easier. It solves for the privacy laws, if not the privacy problems, and it lets those analyzing the data treat it as an economic and computational resource without moral dimensions. I don’t blame a company for taking that stance. It’s easier.
Easier isn’t always the answer. My own TED talk, from 2012, was about looking into the mirror and making informed consent the centerpiece of pooling medical data.
We can do this. It’s what underpinned the first version of the Portable Legal Consent study. It’s underpinning the Resilience Project. It’s underpinning the Bridge platform at Sage.
But it won’t happen - we won’t do it - if we fall into the trap of thinking about “50,000,000 anonymized records.”
One medical record is a person. 50 million is a statistic. It’s a lot easier to toss moral and ethical concerns away when the numbers get big. But no matter how many records we have, each one came from a person. And she deserves a voice, a consent, an engagement, with the research she empowers.
Tomorrow, my dad will be honored at Trinity University in San Antonio as a distinguished alumnus. We’re flying the whole family down to celebrate with him, but I wanted to take a moment to celebrate him here. Since I live on the web most of the time, and few of you know his story, I wanted to tell it.
The Trinity blurb is remarkably apt and pithy:
His occupation is hard to describe: mainly a scientist - at the intersections of the social and environmental sciences - but also a research program manager, provider of technical assistance, and institution-builder - with one foot in the world of research and one foot in the world of practice.
That’s about right. I remember learning the words “sustainable development” from him around the time he taught me to swing a baseball bat, which means late 1970s. My dad drew me into libraries in the 1980s, where I made first contact with networked computer systems, as I helped him find books and images for speeches he gave. His work picnics were places where I stepped on the feet of people who dated back to the Manhattan Project, which is part of being a national labs kid at a place like Oak Ridge.
My dad has gotten a lot of notice in his own fields, but the biggest thing, the thing that makes others realize that he’s a pretty towering guy, came when the Nobel committee formally recognized him as a co-laureate of the 2007 Peace Prize shared by the IPCC and Vice President Gore. His career is remarkable, and he’s not showing any signs of stopping.
Here he is after doing me a favor - chairing the first Science Commons meeting we ever hosted, at the US National Academy of Science. Thanks, Dad! We also co-wrote one of my favorite papers, on open access and sustainable development.
He’s also a pretty normal guy who hangs out in sweatpants at home, likes salty snack treats, and is the first to get silly with my 3-year-old son. One of my favorite pictures of him (which I need to scan) is him, tie askew, in the California sun and wind, sometime in the early 1980s, on a road trip from San Francisco to Los Angeles for the whole family. His tongue is out, his eyes are serious, he’s intent on something. And he’s entirely there, in that moment, no attention elsewhere. That’s my childhood memory of Dad: silly, present, smart, dignified, and generally speaking, hittable with a water balloon.
Dad grew up in the brown lands just after the Dust Bowl and the Depression - the Texas high plains always stand out in his stories. He graduated from high school in Canyon, Texas, in the mid-50s and was ready to go.
(The eponymous canyon, Palo Duro, is also known as the Grand Canyon of Texas. Photo courtesy of Destination360 - pretty, but not sure it’s for me either.)
He worked more jobs than you can imagine (my favorite remains a summer gig as a hotel nightman). He met my mom in college and they married almost immediately. He served years in the Army, and he pulled up the family and moved to India to watch the land react to the Green Revolution in real time - and in that moment, he saw the future of development and the coming crisis of sustainability. We live in a very green place now in East Tennessee, something I think reflects all the way back to those dry places for him.
I am unbelievably, joyously lucky to have grown up with him. He fed me books that I wasn’t supposed to read - Dune in 4th grade stands out - and taught me how to think, how to structure my thoughts, how to find liminal spaces and how to work inside them. I owe much of my career to those skills, and thus, to him. He taught me - he keeps teaching me - every day.
My son calls him Pop, my wife calls him Tom, and my mom calls him “hey you.” But to me and my sisters, he’s just Dad.
And Dad? Congratulations. You’ve earned it. We are so proud of you that it hurts.