Martin Rees: Will technology’s dark side eclipse humanity’s bright future?


In November 2015, Martin Rees gave the Oxford Martin School 10th Anniversary Lecture [here’s the video, here’s the transcript]. The theme of the lecture is that the 21st century is special — let’s make sure we get to the other side intact. We humans have technologies under development that make me think of Stewart Brand’s famous line from the Whole Earth Catalog: “We are as gods and might as well get good at it.” Today Stewart says:

“What I’m saying now is we are as gods and have to get good at it.”

We have to get good at our job because our technologies, from fossil fuels to biotech to AI, give us the opportunity to screw it up. So we need to pay very close attention to making our way successfully through the next 100 years. Lord Rees:

Why is the 21st century special? Our planet has existed for 45 million centuries, but this is the first when one species – ours – can determine the biosphere’s fate. New technologies are transforming our lives and society – they’re even changing human beings themselves. And they’re inducing novel vulnerabilities. Ecosystems are being stressed because there are more of us (world population is higher) and we’re all more demanding of resources. We’re deep into what some call the Anthropocene.

And we’ve had one lucky escape already. At any time in the Cold War era, the superpowers could have stumbled towards nuclear Armageddon through muddle and miscalculation. Robert McNamara, US defence secretary at the time of the Cuba crisis, said after he retired that “[w]e came within a hairbreadth of nuclear war without realizing it. It’s no credit to us that we escaped – Khrushchev and Kennedy were lucky as well as wise.”

This is a terrific lecture, applying science-informed optimism to the benefits and risks of some of our most powerful technologies.

For the rest of this talk I’ll focus on a different topic – the promise, and the dark side, of novel technologies that change society and empower individuals – and I’ll venture some speculations about the far future.

We live in a world increasingly dependent on elaborate networks: electric-power grids, air traffic control, international finance, just-in-time delivery, globally-dispersed manufacturing, and so forth. Unless these networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdowns — real-world analogues of what happened in 2008 to the financial system. Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel can spread a pandemic worldwide within days. And social media can spread panic and rumour literally at the speed of light.

It’s imperative to guard against the downsides of such an interconnected world. Plainly this requires international collaboration. (For instance, whether or not a pandemic gets global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness.)

On pandemics, Oxford Martin colleague Larry Brilliant has taught us how critical it is to invest in “early detection, early response”. Early detection is enabled by the growing power of our networks. Early response is enabled by human and physical infrastructure, and by investing in molecular biology so that we can rapidly analyze detected pathogens, then formulate and manufacture vaccines or antiviral compounds.

One of Martin Rees’s concerns is malign biotech, especially since CRISPR.

Malign or foolhardy individuals have far more leverage than in the past. It is hard to make a clandestine H-bomb. In contrast, biotech involves small-scale, dual-use equipment. Millions will one day have the capability to misuse it, just as they can misuse cybertech today. Indeed, biohacking is burgeoning even as a hobby and competitive game.

So what do we do about this risk? Regulation is useless for controlling the behavior of the “malign or foolhardy”. We should also be careful not to entangle our best researchers in a net of over-regulation, because our best defense is precisely the rapid detection-and-response capability we build to minimize the impact of natural pandemics.

What about the benefits and risks of advanced AI, specifically Artificial General Intelligence (AGI)?

The timescale for human-level AI may be decades, or it may be centuries. Be that as it may, it’s but an instant compared to the future horizons, and indeed far shorter than timescales of the Darwinian selection that led to humanity’s emergence.

I think it’s likely that the machines will gain dominance on Earth. This is because there are chemical and metabolic limits to the size and processing power of ‘wet’ organic brains. Maybe we’re close to these already. But no such limits constrain silicon based computers (still less, perhaps, quantum computers): for these, the potential for further development over the next billion years could be as dramatic as the evolution from pre-Cambrian organisms to humans. So, by any definition of ‘thinking’, the amount and intensity that’s done by organic human-type brains will be utterly swamped by the future cerebrations of AI.

Moreover, the Earth’s biosphere isn’t the optimal environment for advanced AI – interplanetary and interstellar space may be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological ‘brains’ may develop powers that humans can’t even imagine.

But we humans shouldn’t feel too humbled. We could be of special cosmic significance for jump-starting the transition to silicon-based (and potentially immortal) entities, spreading their influence far beyond the Earth, and far transcending our limitations.

So, even in this ‘concertinaed’ timeline — extending billions of years into the future, as well as into the past — this century may be a defining moment where humans could jeopardise life’s immense potential. That’s why the avoidance of complete extinction has special resonance for an astronomer.

That’s the rationale for the Future of Humanity Institute, the element of the Martin School that addresses ‘existential’ risks on the science fiction fringe.

Watch or read, and please tell your friends. We really, really need to focus much more energy on long-term thinking.

I almost forgot to mention that Martin Rees is a cofounder of another prestigious risk research institution, the Centre for the Study of Existential Risk at Cambridge.

More on the Oxford Martin School. Lastly, good news: our home star is good for another six billion years. Just imagine what we can accomplish before we are forced to move!

Lord Martin Rees in conversation at The Wellcome Trust


It’s dependably fun and illuminating to see Martin Rees unconstrained by the political “don’t go theres”. So, have some fun with this Lord Martin Rees in conversation at The Wellcome Trust [13 June 2014, 93 minutes]. Skip the first 10 minutes of formalities. Then as the interview begins with ‘where it all started’ Martin explains the basic principles of his grad school experience in the 1960s:

If you go into an established field you can only do something new by doing something that the old guys got stuck on.

Whereas if you go to a field where new things are happening, then the experience of the old guys is at a heavy discount.

Max Planck’s longer observation is often paraphrased as “Science advances one funeral at a time.” I had to endure only part of that experience, as my advisor Woody Bledsoe would try anything promising. But my mathematics chair was a classical guy who insisted that thesis exams concentrate on partial differential equations – very relevant to planning the Juno rendezvous with Jupiter, not so helpful in computer science. Here’s the challenge: how do we develop young scientists without trapping them inside the boundaries of the department hierarchy?


David MacKay “Perhaps my last post – we’ll see”

David MacKay

Prof. David MacKay has done more than any other human to improve our understanding of practical energy policy. His famous book Sustainable Energy – Without the Hot Air is on the bookshelf of everyone who is seriously interested in making the future better.

Yesterday David wrote:

I noticed that the posts of a friend who died of cancer trickled away to a non-conclusion, and this seems an inevitable difficulty, that the final post won’t ever get writ.

I’d like my posts to have an ending, so I’m going to make this my final one – maybe.

Ever the scientist, he has been documenting his experience as a cancer patient. For example Bye-bye Chemotherapy, Hello TP53! explains how he and his oncologist discuss prospects and options. I hope that David recovers so well that he can write a new book – a scientist’s perspective on how he became a former cancer patient.

Big Organic mounts Asymmetric Warfare attack on public scientist Kevin Folta

There are misrepresentations in this PLOS BIOLOGUE guest post that need to be promptly corrected. Dr. Folta has written a brief analysis of these issues at Science 2.0: Transparency Weaponized Against Scientists.

“Weaponized FOIA” is an appropriate term for the harassment tactic devised by Gary Ruskin and his organic industry backers. Very simply this is “Asymmetric Warfare” against forty public scientists. The attackers have whatever resources they may need – including funding for public relations firms and lawyers. Dr. Folta has only his own personal resources to defend his reputation. He doesn’t have the option to just turn over his defense to a team of professionals.

I am especially outraged at this harassment for alleged lack of transparency. I have been reading Dr. Folta since at least 2012. Why? Because when I undertook to understand the risks and benefits of modern agriculture, my first task was to identify scientists that I could trust. My doctorate is in Computer Science – with no training in molecular biology or horticulture. But I know how to find expertise in other fields. I find candidate scientists who look credible, then put some hours into Google Scholar looking for papers and citations. It’s not rocket science to discover the researchers who have the respect of their colleagues. Then over time it’s a matter of looking at the quality and logical consistency of their arguments.

For example, early on I found Penn State molecular biologist Nina Fedoroff. Looking at her work and CV I noted that she was a recipient of the U.S. National Medal of Science. Perhaps she is a pretty good choice for a scientist to trust. By following her citations to the work of other scientists, a web of references develops. That’s how I came across Prof. Kevin Folta.

Dr. Folta is very unusual in the research community because he invests a quite remarkable amount of unpaid effort into science communications. RSS is your friend for harvesting information generated by scientists like Dr. Folta who publish frequently on a personal blog, give public lectures, record podcasts, etc. All of the writing and presenting that I found – you can find too. If you do that you will quickly confirm my finding that Dr. Folta is objective and transparent to a level that sets a standard for the rest of us to live up to.

From my experience it is very clear why special interests promoting an anti-science agenda will want to discredit Dr. Folta. Hence the Asymmetric Warfare on his reputation. You can verify my claim by reading his blog Illumination and listening to his new podcast Talking Biotech. If you do that you will see that this man is not a shill for any special interest. He is exactly the sort of objective scientist that you are looking for.

Plant Science Expert Panel


Sense About Science is hosting a valuable Q&A between the public and a panel of plant scientists. You can participate:

Send questions via our online form, Twitter to @senseaboutsci using #plantsci, or email us at

The linked science panel page consists of expert answers to questions from the public. As you would expect given the makeup of the UK science panel, the quality of the responses is high. I’ve selected one example demonstrating the nuance offered by Professors Jones and Leyser:

“The environment secretary, Liz Truss, has said that US farmers growing GM crops use less water and less pesticide. Is she right to say this?”

Prof Jonathan Jones:
GM is a method that can be used to confer many different and useful traits. Liz Truss is right to say GM crops can reduce the environmental impact of agriculture. In the US, Bt maize and Bt cotton require less insecticide to control insect pests. Glyphosate (Roundup) is a less damaging herbicide than the herbicides it replaced. Unfortunately, like antibiotics, reliance on one compound (glyphosate) has selected herbicide-resistant weeds in the US, reducing glyphosate effectiveness for weed control. Another GM trait in maize has been used to elevate tolerance of drought stress. For the UK, blight resistant potatoes will require less fungicide applications, and there are many other potential nutritional and agronomic benefits that could be conferred using the GM method.

Prof Ottoline Leyser:
Farming practices associated with each GM crop differ, depending on the specific characteristic that has been introduced. It is therefore not meaningful to state that growing GM crops results in less water and pesticide use, because it depends entirely on which GM crop you are talking about and what the normal practices are for the equivalent non-GM variety. For example, there is very good evidence that the use of GM cotton engineered to resist insect attack has reduced the use of insecticides in cotton production compared to previous practice. However, the use of GM technology to increase vitamin A production in rice does not affect how much pesticide is used.

One type of crop where there has been a particularly vigorous debate about the environmental impacts is herbicide-tolerant crops. There is no doubt that the widespread planting of herbicide-tolerant crops has led to an increase in the use of the specific herbicides tolerated by these crop varieties. Some argue that this has reduced the use of alternative, more environmentally-damaging herbicides. Others point to the negative effects on insect biodiversity caused by the reduction in weeds associated with more effective weed management. Others still highlight the emergence of weeds resistant to herbicides. These debates are important, but they have nothing to do with GM. While many herbicide-tolerant crops have been produced using GM methods, others have been produced using conventional methods, and all the arguments are about herbicides and their use, not about GM. Just as it is inaccurate to say that GM crops reduce pesticides, it is equally inaccurate to say that they cause superweeds.

We need to be able to choose the best solutions to each challenge facing the food supply chain. In some cases this will include a particular GM crop, but in many cases it will not.

Science funding is broken. Thinkable wants to help fix it.


Thinkable is a promising new crowdfunding connection between researchers and sponsors (including the public at large). Founded by oceanographer and chief scientist Ben McNeil, “the idea for Thinkable comes out of Ben’s frustration over the lack of funding for basic research and a passion for blue-sky thinking.” Ben’s recent Ars Technica essay is a good introduction to why he believes science funding is broken, paired with the solution proposed by Thinkable: Is there a creativity deficit in science? If so, the current funding system shares much of the blame.

I won’t try to outline how the Thinkable platform and ecosystem works — the Thinkable website is very well-designed, so you’ll learn more about the venture by just jumping in — and be sure to sign up in your role as a sponsor or a researcher. I decided that the best way to evaluate Thinkable is to participate: I’ve subscribed to sponsor Martin Rees whose current project funding is passing the 50% level: How can we stop blood vessels becoming damaged and sticky during inflammation?

Here are some of the reasons I’m excited about Thinkable:

  1. Taking risks is absolutely fundamental to real progress in science and technology. The existing institutional funding channels are highly risk averse — “crazy ideas need not apply here”.
  2. The path to breakthroughs is cobbled by mistakes. Mistakes are where most of the learning happens.
  3. Those characteristics are familiar to entrepreneurs who are successful innovators. The venture capitalists who consistently make superior returns know this very well. That’s why Silicon Valley slang is peppered with phrases like “Fail fast” and “Pivot”. “Let’s invest through the pivot” has probably been spoken more than once by a VC looking at superb founders (translation: “these guys are so good we want to work with them, even though their idea is probably going to fail”).
  4. Thinkable looks to be administratively very lightweight — so that funding goes to support research, not overheads. I understand that 87% of sponsor funding is delivered, after 10% to support the Thinkable platform and about 3% for payment-processing fees (Visa etc.). Note that once Thinkable can rely on cryptocurrency payment processing, that 3% will fall to near zero.
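As a sanity check on point 4, here is a minimal sketch of the fee arithmetic, assuming the rates quoted above (a 10% platform fee plus roughly 3% payment processing; the function name is mine, not Thinkable's):

```python
def delivered_to_research(pledge, platform_fee=0.10, processing_fee=0.03):
    """Return the portion of a sponsor's pledge that reaches the researcher,
    under the assumed fee structure (fees taken off the top, not compounded)."""
    return pledge * (1 - platform_fee - processing_fee)

# A $100 pledge under the quoted rates delivers $87 to research.
print(round(delivered_to_research(100), 2))
```

If the 3% processing fee falls to near zero (the cryptocurrency scenario), the same function with `processing_fee=0.0` shows delivery rising to 90%.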

In my next post on Thinkable I hope to be able to explain who is funding the venture. Please help spread awareness of the Thinkable platform. If this takes off in a big way we could be helping to Change the World.

Chemical-Free Products: The Complete List

Derek Lowe

Here’s a comprehensive review of chemical-free consumer products, courtesy of Nature Chemistry. I’m flattered to have been listed as a potential referee for this manuscript, which truly does provide the most complete list possible of chemical-free cleaners, cosmetics, and every other class of commercially available product. Along similar lines, I can also recommend (…snip…)

Source: research chemist Derek Lowe, a remarkable insider source on preclinical drug discovery.

“Fallacy is instantaneous but truth works at the speed of science”


Do you recognize the thumbnail at left of the Fukushima radiation plume spreading all over the Pacific? If that’s what you think the image is you definitely will want to read on. If you recognize the thumbnail as the NOAA tsunami wave height model published the day after the Tohoku earthquake — then I hope you find some useful resources here.

I wish I had written No, but in all seriousness…. But I’m very happy that Alistair Dove did write this essay on critical thinking.  This is one of those pieces that we are so happy to find and share! Alistair sees a cross section of flawed reasoning in the comments that appear on the group blog Deep Sea News. E.g.,

… examples of the sort of reasoning that we have seen in comments, emails and tweets about the above examples:

  • Starfish wasting disease. Starfish are melting. Radiation leaked into the ocean at Fukushima. Therefore Fukushima caused the starfish melting.
  • Hurricane/Superstorm Sandy. Hurricane Sandy happened. Then dolphins began dying on the Atlantic coast. Therefore Sandy caused the Atlantic dolphin UME.
  • The “great Pacific garbage patch”. There’s a giant patch of garbage out there. If we could just sort of scoop it up, that would be good. Someone should invent something to do that.
  • The Long Island Sound lobster fishery. “They” sprayed insecticides in the tri-state area to control mosquito populations. Around the same time, lobsters died. Therefore insecticide spraying killed lobsters.

I see the “Backfire Effect” every day: 

A related problem is that in the time between when people first propose a fallacious cause, and when the true cause is revealed through reason and research, the fallacious one can become ingrained like an Alabama tick.  Once people get an idea in their head, even if it’s wrong, getting them to let go of it can be bloody hard.  Indeed, there’s a term for this; it’s called “the Backfire Effect”: when confronting someone with data contrary to their position in an argument, counter-intuitively results in their digging their heels in even more.  In this phenomenon, the media has to accept a sizable chunk of responsibility because, as the lobster example shows, the deadline-driven world of media agencies is more aligned with the rapid pace of the logical fallacy than with the slow and deliberate pace of scientific research.

Alistair closes with a checklist that we should share where it may do some good:

…so it can’t hurt for all of us to think consciously about our thinking, me included. To that end, I offer the following, non-comprehensive list of things to consider before you hit “Reply” on that cleverly crafted response. If you have additional suggestions I invite you to add them in the comments.

  • Am I seeing a pattern that could just be a statistical rarity, and leaping to a conclusion?
  • Am I connecting two events causally, because they occurred close together in space or time?
  • Am I inferring a cause in the absence of evidence for any other explanation?
  • Am I thinking inductively “It must have been such and such…”
  • Am I framing the issue as a false dichotomy (debating only two possible causes, when there may be many others). In other words, am I framing the issue as an argument with two sides, rather than a lively discussion about complex issues?
  • Am I attacking my “opponent” and/or his/her credentials, rather than his/her argument?
  • Am I arguing something simply because other/many people believe it to be true?
  • Am I ignoring data because I don’t want to lose face by conceding that I may be wrong?
  • Am I cherry picking data that support my position (a cognitive bias)

So, I hope that’s enough to motivate you to read Alistair’s No, but in all seriousness… You will be happy you gave it your attention and reflection.

Three climate scientists examine recent slowdown (or ‘pause’) and online science communication

The recent slowdown (or ‘pause’) in global surface temperature rise is a hot topic for climate scientists and the wider public. We discuss how climate scientists have tried to communicate the pause and suggest that ‘many-to-many’ communication offers a key opportunity to directly engage with the public.

I recommend “Pause for thought” in Nature Climate Change. This very short essay by Ed Hawkins, Tamsin Edwards and Doug McNeall is ungated, after free registration. You can get a preview of the technical overview by studying the two following charts carefully. You’ll need to pay attention to the chart key underneath – there is a lot of information compressed into the two panels.


Observed global mean surface air temperatures (HadCRUT4, solid black line) and recent 1998–2012 trend (dashed black line), compared with ten simulations of the CSIRO Mk3.6 global climate model, which all use the RCP6.0 forcing pathway (grey lines). The grey shading represents the 16–84% ensemble spread (quantiles smoothed with a 7-year running mean for clarity); the ensemble mean trend is around 0.20 °C per decade. Two different realizations are highlighted (blue), and linear trends for specific interesting periods are shown (red, green, purple lines). a, The highlighted realization shows a strong warming in the 1998–2012 period, but a 15-year period of no warming around the 2030s. b, The highlighted realization is more similar to the observations for 1998–2012, but undergoes a more rapid warming around the 2020s. Note also that this realization appears outside the ensemble spread for 9 out of 10 consecutive years from 2003–2012.

The charts and discussion illustrate a central truth of climate science – the results are often only understood in a framework of statistics. The pretty, clean projected temperature curves that we see in the media are heavily smoothed over many runs of multiple models. That presentation conceals the natural variability that is part of the challenge of understanding, then testing hypotheses against observations. It is similar to the agonizing process at the Large Hadron Collider (LHC) as the teams tried to develop enough data to tease out a sufficiently confident identification of an anomaly corresponding to the Higgs.
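The point about ensemble smoothing can be illustrated with a small synthetic example (entirely made-up numbers, not climate-model output): ten "runs" sharing one forced trend but with different random natural variability. The ensemble mean recovers the forced trend almost exactly, while a 15-year trend fitted to any single run can look like a pause or a surge:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2050)
trend = 0.02  # assumed forced warming, deg C per year (~0.2 C/decade)

# Ten synthetic "runs": identical forced trend, independent year-to-year noise
# standing in for natural variability.
runs = np.array([trend * (years - years[0]) + rng.normal(0, 0.1, years.size)
                 for _ in range(10)])

# Averaging across runs smooths away the variability each run actually has.
ensemble_mean = runs.mean(axis=0)

# A 15-year trend in one run can differ substantially from the forced trend.
fit = np.polyfit(years[:15], runs[0][:15], 1)[0]
print(f"forced: {trend:.3f} C/yr, 15-yr trend in one run: {fit:.3f} C/yr")
```

This is the same reason the observed 1998–2012 trend can sit well away from the smooth ensemble-mean curve without contradicting the underlying forced warming.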

If you have a specific question about the authors’ presentation, you can ask the scientists directly on twitter. It is uncommon for authors to reveal their twitter handles in a paper, so please don’t make them regret the open door!

I recommend two other articles in this Nature Climate Change series:

1. Heat hide and seek [PDF]. Natural variability can explain fluctuations in surface temperatures, but can it account for the current slowdown in warming? The authors offer an excellent summary of the more promising current research, including in particular the variability in heat distribution, such as:

  • El Niño/Southern Oscillation
  • Pacific Decadal Oscillation
  • Atlantic Multi-decadal Oscillation

2. Media discourse on the climate slowdown, where I learned among other things that the biggest recent media spike seems to be in Oceania – where we are presently cruising. Australia has been suffering from a severe drought, which no doubt generates increased interest in climate.

What do we know about the Tohoku tsunami debris field?

Click the model static frame to display the animation

We have a direct personal interest in evaluating the risk of debris collisions in the captioned North Pacific area in 2014. We will be sailing from NZ to Alaska again via Tahiti and Hawaii. There is always ocean debris – from plastic flotsam and dangerous fishing nets and gear to shipping containers. And we have first-hand experience with the too-common incidence of logs and trees floating near shore and in the channels of the Pacific Northwest.


The reason we are concerned about the open-ocean Hawaii–Alaska tsunami debris is the possibility of a high incidence of semi-submerged heavy objects that can put holes in the boat. Even an alert watch-keeper will not see these dangers ahead when sailing fast at night. Collision with logs, trees, fuel drums, or docks at 12–15 knots — that’s an event we would prefer to avoid. The IPRC sightings describe the Kalama Beach log at left as “Large log, length app. 20′, diameter app. 3′.”

It’s challenging to assess whether dangerous-collision risks are materially greater post-tsunami. What is missing is the perspective: “how different is it today from background?” I’ve analyzed that question regarding the radiation/contamination risks – e.g., the media hyperventilating about Fukushima impacts on the Pacific Ocean. I think I understand the Pacific Ocean contamination risks fairly well; that analysis is easier because we have a sound series of observations combined with a good theory.

We are accustomed to access to data on ocean conditions based upon satellite observations. Near real-time temperature, currents, and sea state are available for most of the planet. Unfortunately the satellite imagery doesn’t tell us about the collision hazards that could be waiting for our yacht (two hulls, twice the collision opportunity, right?). The reason is that the Tohoku debris is so dispersed and physically small that it has not been detectable by satellite imagery since shortly after the March 2011 tsunami. The NOAA special website on the Tohoku debris is a good place to begin your research. Next is the NOAA Severe Marine Debris Event Report: Japan Tsunami Marine Debris of June 2013.

Because of the size of the potential debris area and the narrow coverage of high resolution satellite imagery, it became clear that a full coverage survey of the situation was not possible or practical. Shortly after the tsunami, a variety of types of satellite imagery (e.g., ENVISAT, LANDSAT, SPOT, ASTER) became available from a variety of sources, including U.S. Geological Survey, the International Disaster Charter, the European Space Agency rolling archives, and a joint NASA/JAXA Web site. This imagery was analyzed by …(NOAA) NESDIS … and indicated fields of debris that were visible in 15–30 m resolution data. More recently, the National Geospatial-Intelligence Agency (NGA) has provided NOAA with higher resolution (1–5 m) satellite imagery.

Although debris fields are no longer visible by satellite, dispersed, buoyant items continue to float in the North Pacific. This assorted debris, referred to henceforth as Japan Tsunami Marine Debris (JTMD), ranges from derelict vessels and large floating docks to small household items, with fishing gear and construction items of various sizes and compositions in between…

Because we do not have wide-field satellite data, we know very little about the actual mass and spatial density of the debris distribution. Our data sources are primarily coastal observations of objects deposited on beaches (Midway, Hawaii, and the North American mainland of CA – OR – WA – BC – AK), plus a few random observations from sailors at sea.

Oceanographic modeling:

NOAA GNOME model: With so little empirical data our picture of what is out there in the Pacific comes largely from modeling. There are two primary models that we have identified, the NOAA GNOME model and the IPRC model. NOAA periodically updates their summary page about the latest GNOME predictions: NOAA Tsunami Debris Model Sep 2013. The NOAA update page is a concise introduction to the modeling. The next graphic shows the big picture, extracted from that NOAA page. The hatching denotes the predicted highest density of debris with 1% windage at the end of the simulation. 


IPRC Model: This model is publicly available, run by the International Pacific Research Center in the School of Ocean and Earth Science and Technology at the University of Hawaii. My understanding is that IPRC updates the model initialization and parameters based on observations – so the near-term model frames are closer to reality than the far-future frames. The model doesn’t really tell us about concentrations of threatening semi-submerged objects. It does, however, give us a visualization of what we can infer from the current/wind drift physics.

Click the model static frame to display the animation

This link to the IPRC Tsunami Debris Models page provides a tabulation of the most recent model runs for 0% to 5% windage levels. The January 2014 prediction frame correlates with the reports of deadheads in the area from Hawaii to the west coast of the US. The visualization shows the higher windage debris rapidly drifting east to the shorelines of Alaska to California, while the deadheads and similar collect in the area east of Hawaii.
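To make the windage parameter concrete, here is a toy one-dimensional sketch (illustrative current and wind values of my own choosing, not numbers from the GNOME or IPRC models): an object's net drift is roughly the surface current plus a windage fraction of the wind speed, so low-windage deadheads ride the currents while high-windage items blow downwind to the shorelines much faster:

```python
def drift_speed(current_kts, wind_kts, windage):
    """Net drift speed in knots: surface current plus the windage
    fraction of wind speed acting on the object's exposed area."""
    return current_kts + windage * wind_kts

current, wind = 0.5, 20.0  # knots, purely illustrative
for w in (0.00, 0.01, 0.05):
    print(f"windage {w:.0%}: {drift_speed(current, wind, w):.2f} kts")
```

At these assumed values a 5% windage item drifts three times faster than a 1% windage deadhead, which is consistent with the model's picture of high-windage debris reaching the Alaska–California coast while deadheads linger east of Hawaii.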

On-the-ground observations:

In sailors’ terms, 1% windage is typically a “deadhead” – an almost totally submerged tree or equivalent timber [Windward Oahu report]. That is what John Sangmeister, owner of the 72-foot trimaran Lending Club/Tritium Racing, thinks they hit multiple times during the 2013 Transpac race from Los Angeles to Hawaii.

The image at left is of one of the damaged foils aboard Lending Club. We do not wish to be taking such snapshots on our passage through those waters. On the collisions, Kimball Livingston reported:

The first time they hit, they slipped the daggerboard out of its housing, flipped it over and went on with business. Then they hit another time, and re-flipped it, and they’ve hit a few more times without major damage, but wow. The all star crew included Gino Morrelli, Howie Hamlin, Ryan Breymaier and Peter Stoneberg. One interim report: “Logs, logs, and more logs. Sailing normally, but with a large amount of vibration due to the damage.”

We have similar first-hand reports of collisions with semi-submerged objects from a Kiwi friend who was sailing in the same Transpac fleet. During daylight they could see logs and trees in various stages of submersion.

More reports from three other 2013 Transpac yachts:

**Sighted 15′ chunk of floating telephone pole.
s/v Criminal Mischief

**Sighted 35′ floating tree trunk.
s/v Between The Sheets

**Large pieces of debris, a couple of pieces of lumber looked like parts of a house

**Struck what may have been a 10′ section of telephone pole.
s/v Manatea

So, what do we know from “on the ground” observations? Every country and state bordering the North Pacific has a debris reporting scheme – based upon shore or near-shore sightings. NOAA collects sighting reports for the North Pacific (via email, if you have a sighting to report). Here are the maps of the sightings reports. Unfortunately, “the absence of evidence is not the evidence of absence” – in this case the absence of reports may well indicate an absence of vessels to sight debris. Here is a clip from the November 2013 overview map:


These general sightings reports are not useful to us. An animation showing the date of sighting would give some very general indication of trends – that would be useful. And the majority of the reports are high windage items that aren’t a threat to us. As I write the most indicative observations are from the 2013 Transpac reports like Lending Club. Please email us if you know of any other first-hand sightings reports – especially of the type that threaten small boats like ours.

More Resources:



A map of the Gulf of Alaska, BC, Washington and Oregon. An example of predicted debris concentrations (derived from satellite-tracked ‘drifters’) shows that Alaska is an immense accumulation point. (Figure adapted from Lumpkin et al. (2012))

Image from Alaska Marine Stewardship Foundation


Journey To The Center Of The Gyre: The Fate Of The Tohoku Tsunami Debris Field, by Peter Franks of the Integrative Oceanography Division, Scripps Institution of Oceanography. Very descriptive of marine-environment impacts, but not relevant to potential collisions with trees and large wooden structures.