Salva tu Playa

Los Llanos beach lies next to the Palmetum and the Parque Marítimo in Santa Cruz de Tenerife, the capital.

For cleaner, better-kept beaches, free of noise and nuisance. For the recovery of beaches, of bathing access, and of full enjoyment of the sea.

Online signature drive. Sign today to support and defend your beach tomorrow. For a beach for everyone.

Thursday, December 27, 2018

Exploring New England's coastal ecosystems in the dead of winter

In early January 2018, a nor’easter pummeled the East Coast. A record-breaking high tide rendered many streets in Boston impassable, and seawater rushed down Seaport Boulevard in the city’s Seaport District. A deluge of water poured down the steps leading to the Aquarium subway station, forcing it to close.

Less than a week later, in a dry classroom on MIT’s campus, a group of students discussed how coastal cities like Boston can cope with worsening floods due to rising sea levels.

“We live in a coastal city, so obviously we are being significantly impacted by sea level rise,” says Valerie Muldoon, a third-year mechanical engineering student. “We talked about the bad nor’easter earlier in January and brainstormed ways to mitigate the flooding.”

Muldoon and her fellow students were enrolled in 2.981 (New England Coastal Ecology), a class that meets during MIT’s Independent Activities Period. The course is offered through the MIT Sea Grant College Program, which is affiliated with MIT’s Department of Mechanical Engineering.

MIT Sea Grant instructors Juliet Simpson, a research engineer, and Carolina Bastidas, a research scientist, use the four-week class to introduce students to the biological makeup of coastal ecosystems, to the crucial role these areas play in protecting the environment, and to the effects human interaction and climate change have had on them.

“We want to give a taste of coastal communities in New England to the students at MIT — especially those who come from abroad or other parts of the U.S.,” says Bastidas, a marine biologist who focuses her research primarily on coral and oyster reefs.

Muldoon, who has a double minor in energy studies and environment and sustainability, says she was “so excited to see a Course 2 class on coastal ecology.”

“I’m passionate about protecting the environment, so the topic really resonated with me,” she says.

The course begins with an introduction to the different types of coastal ecosystems found in the New England area, such as rocky intertidal regions, salt marshes, eelgrass meadows, and kelp forests. In addition to providing an overview of the makeup of each environment, the course instructors also discuss the physiology of the countless organisms that live in them.

Halfway through the course, students learn about how human impacts like climate change, eutrophication, and increased development have affected coastal habitats.

“We focus on climate change as it impacts coastal communities like rocky shores and salt marshes,” says Simpson, a coastal ecologist who studies how plants and algae respond to human interference. “There are a lot of interesting implications for sea level rise for intertidal organisms.”

Sea level rise, for example, has forced organisms that live in salt marshes to migrate upland. Changes in both water and air temperature also have a drastic effect on the inhabitants of coastal regions.

“As temperatures rise, all of those organisms are going to need to adapt or the communities are going to change, possibly dramatically,” explains Simpson.

Protecting coastal ecosystems has far-reaching implications that go beyond the animals and plants that live there, because these ecosystems offer a natural defense against climate change. Many coastal ecosystems are natural hot spots for carbon capture and sequestration. Salt marshes and seagrass meadows capture vast amounts of carbon that can be stored for several thousand years in peat.

“I was shocked at how much carbon the plants in these ecosystems can hold through sequestration,” recalls Muldoon.

Protecting these areas is essential to continue this natural sequestration of carbon and prevent carbon already stored there from leaking out. Coastal ecosystems are also instrumental in protecting coastal cities, like Boston, from flooding due to sea level rise.

“We talk about the ecology of coastal cities and how flooding from storms and sea level rise impacts human communities,” adds Simpson.

The class culminates in a field trip to Odiorne Point State Park in New Hampshire, where students get to interact with the communities they’ve learned about. Using fundamental techniques in ecology, students collect data about the species living in the salt marsh and rocky shore nearby. 

Bastidas and Simpson will expand the class’ scope beyond New England in a new course — 2.982 (Ecology and Sustainability of Coastal Ecosystems) — which will be offered in fall 2019.

While the effects of climate change on coastal ecosystems often paint a dire picture, the instructors want students to focus on the positive.

“Rather than have students focus on the gloom and doom aspect, we want to encourage them to come up with novel solutions for dealing with climate change and carbon emissions,” adds Bastidas.

Muldoon sees a special role for mechanical engineers like herself in developing such solutions.

“I think it’s so important for mechanical engineering students to take classes like this one because we are definitely going to be needed to help mitigate the problems that come with sea level rise,” she says.



from MIT News - Oceanography and ocean engineering http://bit.ly/2Cz8ad6

Wednesday, December 12, 2018

New climate model to be built from the ground up

The following news article is adapted from a press release issued by Caltech, in partnership with the MIT School of Science, the Naval Postgraduate School, and the Jet Propulsion Laboratory.

Facing the certainty of a changing climate coupled with the uncertainty that remains in predictions of how it will change, scientists and engineers from across the country are teaming up to build a new type of climate model that is designed to provide more precise and actionable predictions. 

Leveraging recent advances in the computational and data sciences, the comprehensive effort capitalizes on vast amounts of data that are now available and on increasingly powerful computing capabilities both for processing data and for simulating the Earth system. 

The new model will be built by a consortium of researchers led by Caltech, in partnership with MIT; the Naval Postgraduate School (NPS); and the Jet Propulsion Laboratory (JPL), which Caltech manages for NASA. The consortium, dubbed the Climate Modeling Alliance (CliMA), plans to fuse Earth observations and high-resolution simulations into a model that represents important small-scale features, such as clouds and turbulence, more reliably than existing climate models. The goal is a climate model that projects future changes in critical variables such as cloud cover, rainfall, and sea ice extent more accurately — with uncertainties at least half the size of those in existing models.

"Projections with current climate models — for example, of how features such as rainfall extremes will change — still have large uncertainties, and the uncertainties are poorly quantified," says Tapio Schneider, Caltech's Theodore Y. Wu Professor of Environmental Science and Engineering, senior research scientist at JPL, and principal investigator of CliMA. "For cities planning their stormwater management infrastructure to withstand the next 100 years' worth of floods, this is a serious issue; concrete answers about the likely range of climate outcomes are key for planning."

The consortium will operate in a fast-paced, start-up-like atmosphere, and hopes to have the new model up and running within the next five years — an aggressive timeline for building a climate model essentially from scratch. 

"A fresh start gives us an opportunity to design the model from the outset to run effectively on modern and rapidly evolving computing hardware, and for the atmospheric and ocean models to be close cousins of each other, sharing the same numerical algorithms," says Frank Giraldo, professor of applied mathematics at NPS.

Current climate modeling relies on dividing up the globe into a grid and then computing what is going on in each sector of the grid, as well as how the sectors interact with each other. The accuracy of any given model depends in part on the resolution at which the model can view the Earth — that is, the size of the grid's sectors. Limitations in available computer processing power mean that those sectors generally cannot be any smaller than tens of kilometers per side. But for climate modeling, the devil is in the details — details that get missed in a too-large grid. 
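
As a rough illustration of how such a gridded model works, here is a minimal Python sketch. The diffusion-style update, periodic boundaries, and all parameters are illustrative stand-ins, not CliMA's equations; the point is how cell-by-cell updates work and why finer grids get expensive so quickly.

    import numpy as np

    # Toy gridded model: one scalar (say, temperature) per grid cell. Real models
    # carry winds, humidity, salinity, and more, but the cost scaling is the same.
    n_lat, n_lon = 90, 180                  # ~2-degree cells; real cells span tens of km
    field = np.random.rand(n_lat, n_lon)

    def step(field, dt=0.1, kappa=0.2):
        """Advance one time step: each cell exchanges with its four neighbors."""
        up    = np.roll(field,  1, axis=0)  # periodic boundaries, for simplicity only
        down  = np.roll(field, -1, axis=0)
        left  = np.roll(field,  1, axis=1)
        right = np.roll(field, -1, axis=1)
        return field + dt * kappa * (up + down + left + right - 4.0 * field)

    for _ in range(100):
        field = step(field)

    # Halving the cell size means 4x the cells and roughly half the stable time
    # step, so each simulated year costs about 8x more computation.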

For example, low-lying clouds have a significant impact on climate by reflecting sunlight, but the turbulent plumes that sustain them are so small that they fall through the cracks of existing models. Similarly, changes in Arctic sea ice have been linked to wide-ranging effects on everything from polar climate to drought in California, but it is difficult to predict how that ice will change in the future because it is sensitive to the density of cloud cover above the ice and the temperature of ocean currents below, neither of which can be resolved by current models.

To capture the large-scale impact of these small-scale features, the team will develop high-resolution simulations that model the features in detail in selected regions of the globe. Those simulations will be nested within the larger climate model. The effect will be a model capable of "zooming in" on selected regions, providing detailed local climate information about those areas and informing the modeling of small-scale processes everywhere else.

"The ocean soaks up much of the heat and carbon accumulating in the climate system. However, just how much it takes up depends on turbulent eddies in the upper ocean, which are too small to be resolved in climate models," says Raffaele Ferrari, a Cecil and Ida Green Professor of Oceanography at MIT. "Fusing nested high-resolution simulations with newly available measurements from, for example, a fleet of thousands of autonomous floats could enable a leap in the accuracy of ocean predictions."

While existing models are often tested by checking predictions against observations, the new model will take ground-truthing a step further by using data-assimilation and machine-learning tools to "teach" the model to improve itself in real time, harnessing both Earth observations and the nested high-resolution simulations. 
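
The "teaching" step can be pictured as a data-assimilation update that blends a model estimate with an observation in proportion to their uncertainties. Below is a minimal scalar sketch of that idea, a textbook Kalman-style update rather than CliMA's actual algorithm; all numbers are invented.

    def assimilate(model_estimate, model_var, observation, obs_var):
        """Scalar Kalman-style update: blend model and observation by uncertainty."""
        gain = model_var / (model_var + obs_var)   # trust the data more when the model is uncertain
        new_estimate = model_estimate + gain * (observation - model_estimate)
        new_var = (1.0 - gain) * model_var         # the blended estimate is more certain
        return new_estimate, new_var

    # Model says 14.0 C (std 0.5); an ocean float measures 14.6 C (std 0.3):
    estimate, variance = assimilate(14.0, 0.5**2, 14.6, 0.3**2)   # -> ~14.44 C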

"The success of computational weather forecasting demonstrates the power of using data to improve the accuracy of computer models; we aim to bring the same successes to climate prediction," says Andrew Stuart, Caltech's Bren Professor of Computing and Mathematical Sciences.

Each of the partner institutions brings a different strength and research expertise to the project. At Caltech, Schneider and Stuart will focus on creating the data-assimilation and machine-learning algorithms, as well as models for clouds, turbulence, and other atmospheric features. At MIT, Ferrari and John Marshall, also a Cecil and Ida Green Professor of Oceanography, will lead a team that will model the ocean, including its large-scale circulation and turbulent mixing. At NPS, Giraldo will lead the development of the computational core of the new atmosphere model in collaboration with Jeremy Kozdon and Lucas Wilcox. At JPL, a group of scientists will collaborate with the team at Caltech's campus to develop process models for the atmosphere, biosphere, and cryosphere.

Funding for this project is provided by the generosity of Eric and Wendy Schmidt (by recommendation of the Schmidt Futures program); Mission Control Earth, an initiative of Mountain Philanthropies; Paul G. Allen Philanthropies; Caltech trustee Charles Trimble; and the National Science Foundation.



from MIT News - Oceanography and ocean engineering https://ift.tt/2zWNNVP

Monday, November 26, 2018

Ernst Frankel, shipping expert and professor emeritus of ocean engineering, dies at 95

Ernst G. Frankel MME ’60, SM ’60, professor emeritus of ocean engineering who served on MIT’s faculty for 36 years, passed away on Nov. 18 at the age of 95. Frankel, who was also a former professor of management at the Sloan School of Management, was a leading expert in shipping, shipbuilding, and port management.

Born in 1923 in Beuthen, Germany, Frankel served in the Royal Navy during World War II. He also served in the Israeli navy in 1948. After the war, he pursued his bachelor’s degree in marine engineering at London University. He worked for eight years as chief engineer of Zim Navigation Company in Israel, before moving to America and enrolling in MIT to study ocean engineering.

He graduated from MIT in 1960 with a master of science in ocean engineering and a master’s degree in marine mechanical engineering. In his graduate thesis, he examined the effects of surge, pitch, and heave on semisubmerged displacement vessels in regular waves. After graduating, he joined the faculty of the then-named Department of Naval Architecture and Marine Engineering. He remained on the faculty until his retirement in 1995.

Throughout his career, Frankel authored 21 books and over 700 academic papers. In 1971, he was named head of the Interdepartmental Commodity Transportation and Economic Development Laboratory, which he also helped establish.

In addition to his work at MIT, Frankel acted as an advisor to a number of governments, international organizations, and shipping companies. He was a member of the Board of Directors of Neptune Orient Lines, one of the world’s largest shipping companies, as well as an advisor to the Panama Canal Authority. He also served as a port, shipping, and aviation advisor to the World Bank, a senior advisor on ports to the secretary general of the International Maritime Organization, and a member of the U.N.-sponsored World Maritime University’s Visiting Committee.

Frankel received a number of accolades throughout his career including a Gold Medal from the government of Great Britain in 1956. He was also a member of the Society of Naval Architects and Marine Engineers and the Transportation Science Section Council.

In the 1970s and 1980s, Frankel expanded his expertise beyond ocean engineering and naval architecture, setting his sights on business and economics. He earned a master of business administration in operations management and a doctorate in business administration in systems management from Boston University. He also received a PhD in transport economics from the University of Wales in 1985.

This foundation in economics and business management led to a dual appointment in the Sloan School of Management. In addition to acting as professor of ocean engineering, in the early 1990s Frankel was named a professor of management at Sloan. 

After his retirement in 1995, Frankel remained active in both teaching and research. When Elon Musk announced the Hyperloop concept in 2013, Frankel received some unexpected media attention for research he had conducted two decades earlier. In the early 1990s, Frankel led a team that designed a vacuum tube that could enable travel between Boston and New York City in 40 minutes — a concept similar to what Musk has been hoping to achieve.

In an interview with the BBC in 2014, Frankel said, “The advantage of a vacuum tube is that you can achieve high speeds. … We built a half-mile long tube at the playing fields of MIT, evacuated it, and then shot things through it in order to measure what sort of velocities we could obtain.”

Funeral services were held in Brookline, Massachusetts, on Nov. 20.



from MIT News - Oceanography and ocean engineering https://ift.tt/2DYobv3

Wednesday, November 21, 2018

Building the ultimate record of the ocean

Before the advent of modern observational and modeling techniques, understanding how the ocean behaved required piecing together disparate data — often separated by decades in time — from a handful of sources around the world. In the 1980s, that started to change when technological advancements, such as satellites, floats, drifters, and chemical tracers, made continuous, mass measurements possible.

Still, the resulting new datasets often existed independently of each other, obscuring the big picture of how the ocean circulates, transfers heat, affects climate, stores carbon, and more. That's why Carl Wunsch, professor emeritus of physical oceanography in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and member of the EAPS Program in Atmospheres, Oceans and Climate (PAOC), started spearheading an endeavor to reveal that big picture nearly 20 years ago.

Following on the heels of the World Ocean Circulation Experiment (WOCE), Wunsch founded a consortium that sought to combine global ocean datasets with state-of-the-art circulation models. Only with this combination of observation and theory could scientists fully understand the physical and dynamical state of the ocean, and thus its role in climate, Wunsch wrote for the journal Oceanography in 2009. The consortium, which came to be called Estimating the Circulation and Climate of the Ocean (ECCO), was a massive undertaking, involving an international network of researchers and governmental bodies to exchange and analyze billions of ocean observations taken from all corners of the globe.

“It was like building a large telescope,” Wunsch says. “That’s what ECCO has been.”

Today, ECCO is largely heralded as the foundational framework for understanding the behavior of the entire ocean for decades to come. Last month, Wunsch and his collaborators published a progress report of sorts on ECCO efforts in The Bulletin of the American Meteorological Society, where they detail the best record of ocean circulation to date: a 20-year average of ocean climate and circulation, called a climatology, that obeys the laws of fluids and includes all of the data collected on the ocean from around the world since 1992.

In the article, “A Dynamically Consistent, Multivariable Ocean Climatology,” the authors outline recent updates to ECCO and explain the deep trove of information that makes it possible, including observations from all of the altimetric satellites that have flown since 1992; temperature and salinity data from depth sensors, expendable bathythermographs, and Argo profiles; and — perhaps most fascinating — data collected via sensors on deep-diving elephant seals.

With an immense volume of data — several billion observations — Wunsch and his collaborators write that the problem soon became how to combine the massive datasets and fit them to a model that would represent a three-dimensional time-evolving ocean over decades. Fortuitously, during ECCO’s initiation, a parallel effort at MIT was underway, led by EAPS Cecil and Ida Green Professor of Oceanography John Marshall, to develop a new ocean general circulation model, called the MIT General Circulation Model, which Wunsch adapted to become the dynamical engine of ECCO.

Detailed understanding of the accuracies and precisions of this methodology, including at least some approximation to an error estimate on all scales, is “an unglamorous but essential activity,” Wunsch says.

Unglamorous as the methodology may be, the results are elegant solutions that adequately fit almost all types of ocean observations and that are, simultaneously, consistent with the model. These solutions are now being used to inform a wide range of research, from ocean variability and biological cycles to coastal physics and geodesy. Some studies have involved more immediate applications, like predicting physical flow and mixing fields, which influence the ecosystems of lobsters and cod. Others offer better resolution into big-picture issues, like ocean carbon absorption, sea level rise, climate forecasting, and paleoclimate.

With ECCO, analyzing these problems is no longer confined to the use of single datasets, and researchers are freed from worries that basic properties such as energy conservation are violated in the analysis, says Wunsch.

“It’s a luxury to think about the long term,” he says. But, he adds, it is a scientific and social necessity and requires decades more data to go beyond 20 to 30 years.

Today the ECCO effort stands as proof that model-data combinations looking at decadal and longer time scales are possible, says Wunsch. But the consortium’s goals don’t end there.

“We want this climatology to be used for a greater variety of purposes, and we invite the use and critique of the result by the wider community,” he says. All of the data and the model are publicly available, Wunsch says, and if someone is interested, all they have to do is ask for help.

Wunsch, who is retired but still has an office in the Green Building at MIT, says gladly that his former students and group members have now taken over the ECCO effort. In fact, the article co-authors were all once Wunsch’s advisees: Associate Professor Patrick Heimbach of the University of Texas at Austin, Principal Scientist Ichiro Fukumori of the NASA Jet Propulsion Laboratory, and Rui M. Ponte of Atmospheric and Environmental Research (AER), Inc. 

Wunsch hopes that ECCO’s spread to the next generation of researchers will make it more resistant to fickle political and economic trends, because understanding how the ocean is behaving under a changing climate — and how it is likely to change in the future — requires uninterrupted observations of the immense complexity of ocean circulation.

“There can’t be any gaps in data,” says Wunsch. “Gaps are deadly.”



from MIT News - Oceanography and ocean engineering https://ift.tt/2Qesfxc

Tuesday, November 6, 2018

Oceanographers produce first-ever images of entire cod shoals

For the most part, the mature Atlantic cod is a solitary creature that spends most of its time far below the ocean’s surface, grazing on bony fish, squid, crab, shrimp, and lobster — unless it’s spawning season, when the fish flock to each other by the millions, forming enormous shoals that resemble frenzied, teeming islands in the sea.

These massive spawning shoals may give clues to the health of the entire cod population — an essential indicator for tracking the species’ recovery, particularly in regions such as New England and Canada, where cod has been severely depleted by decades of overfishing.

But the ocean is a murky place, and fish are highly mobile by nature, making them difficult to map and count. Now a team of oceanographers at MIT has journeyed to Norway — one of the last remaining regions of the world where cod still thrive — and used a synoptic acoustic system to, for the first time, illuminate entire shoals of cod almost instantaneously, during the height of the spawning season.

The team, led by Nicholas Makris, professor of mechanical engineering and director of the Center for Ocean Engineering, and Olav Rune Godø of the Norwegian Institute of Marine Research, was able to image multiple cod shoals, the largest spanning 50 kilometers, or about 30 miles. From the images they produced, the researchers estimate that the average cod shoal consists of about 10 million individual fish.

They also found that when the total population of cod dropped below the average shoal size, the species remained in decline for decades.

“This average shoal size is almost like a lower bound,” Makris says. “And the sad thing is, it seems to have been crossed almost everywhere for cod.”

Makris and his colleagues have published their results today in the journal Fish and Fisheries.

Echoes in the deep

For years, researchers have attempted to image cod and herring shoals using high-frequency, hull-mounted sonar instruments, which direct narrow beams below moving research vessels. These ships traverse a patch of the sea in a lawnmower-like pattern, imaging slices of a shoal by emitting high-frequency sound waves, and measuring the time it takes for the signals to bounce off a fish and back to the ship. But this method requires a vessel to move slowly through the water to get counts; one survey can take many weeks to complete and typically samples only a small portion of any particular expansive shoal, often completely missing shoals between survey tracks and never capturing shoal dynamics.
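
The timing step underlying this method is simple time-of-flight ranging: sound travels at roughly 1,500 meters per second in seawater, so the echo delay gives the distance. Here is a minimal sketch; the sound speed is approximate (it varies with temperature and salinity) and the numbers are illustrative.

    SOUND_SPEED_SEAWATER = 1500.0   # m/s, approximate; varies with temperature and salinity

    def echo_range(round_trip_time_s):
        """Distance to a target from a sonar echo's round-trip travel time."""
        return SOUND_SPEED_SEAWATER * round_trip_time_s / 2.0   # halve: sound goes out and back

    # An echo returning after 0.4 seconds puts the target about 300 m away:
    depth_m = echo_range(0.4)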

The team made use of the Ocean Acoustic Waveguide Remote Sensing (OAWRS) system, an imaging technique developed at MIT by Makris and co-author Purnima Ratilal, which emits low-frequency sound waves that can travel over a much wider range than high-frequency sonar. The sound waves are essentially tuned to bounce off fish — in particular, off their swim bladder, a gas-filled organ that reflects sound waves like echoes off a tiny drum. As these echoes return to the ship, researchers can aggregate them to produce an instant picture of millions of fish over vast areas.

Making passage

In February and March of 2014, Makris and a team of students and researchers headed to Norway to count cod, herring, and capelin during the height of their spawning seasons. They towed OAWRS aboard the Knorr, a U.S. Navy research vessel that is operated by the Woods Hole Oceanographic Institution and is best known as the ship aboard which researchers discovered the remnants of the Titanic.

The ship left Woods Hole and crossed the Atlantic over two weeks, during which time the crew continuously battled storms and choppy winter seas. When they finally arrived at the southern coast of Norway, they spent the next three weeks imaging herring, cod, and capelin along the entire Norwegian coast, from the town of Alesund, north to the Russian border.

“The underwater terrain was as treacherous as the land, with submerged seamounts, ridges, and fjord channels,” Makris recalls. “Billions of herring actually would hide in one of these submerged fjords near Alesund during the daytime, about 300 meters down, and come up at night to shelves about 100 meters deep. Our mission there was to instantaneously image entire shoals of them, stretching for kilometers, and sort out their behavior.”

A window through a hurricane

As they moved up the Norwegian coast, the researchers towed a 0.5-kilometer-long array of passive underwater microphones and a device that emitted low-frequency sound waves. After imaging herring shoals in southern Norway, the team moved north to Lofoten, a dramatic archipelago of sheer cliffs and mountains, depicted most famously in Edgar Allan Poe’s “A Descent into the Maelström,” in which the author made note of the region’s abundance of cod.

To this day, Lofoten remains a primary spawning ground for cod, and there, Makris’ team was able to produce the first-ever images of an entire cod shoal, spanning 50 kilometers.

Toward the end of their journey, the researchers planned to image one last cod region, just as a hurricane was projected to hit. The team realized there would be only two windows of relatively calm winds in which to operate their imaging equipment.

“So we went, got good data, and fled to a nearby fjord as the eye wall struck,” Makris recalls. “We ended with 30-foot seas at dawn and the Norwegian coast guard, in a strangely soothing young voice, urging us to evacuate the area.” The team was able to image a slightly smaller shoal there, spanning about 10 kilometers, before completing the expedition.

On the brink

Back on dry land, the researchers analyzed their images and estimated that an average shoal consists of about 10 million fish. They also looked at historical tallies of cod in Norway, New England, the North Sea, and Canada, and discovered an interesting trend: Those regions — like New England — that experienced long-lasting declines in cod stocks did so when the total cod population dropped below roughly 10 million — the same number as an average shoal. When cod dropped below this threshold, the population took decades to recover, if it did at all.

In Norway, the cod population always stayed above 10 million and was able to recover, climbing back to preindustrial levels over the years, even after significant declines in the mid-20th century. The team also imaged shoals of herring and found a similar trend through history: When the total population dropped below the level of an average herring spawning shoal, it took decades for the fish to recover.

Makris and Godø hope that the team’s results will serve as a measuring stick of sorts, to help researchers keep track of fish stocks and recognize when a species is on the brink.

“The ocean is a dark place; you look out there and can’t see what’s going on,” Makris says. “It’s a free-for-all out there, until you start shining a light on it and seeing what’s happening. Then you can properly appreciate and understand and manage.” He adds, “Even if field work is difficult, time-consuming, and expensive, it is essential to confirm and inspire theories, models, and simulations.”

This research was supported, in part, by the Norwegian Institute of Marine Research, the Office of Naval Research, and the National Science Foundation.



from MIT News - Oceanography and ocean engineering https://ift.tt/2D6HFfT

Wednesday, October 17, 2018

Arctic ice sets speed limit for major ocean current

The Beaufort Gyre is an enormous, 600-mile-wide pool of swirling cold, fresh water in the Arctic Ocean, just north of Alaska and Canada. In the winter, this current is covered by a thick cap of ice. Each summer, as the ice melts away, the exposed gyre gathers up sea ice and river runoff, and draws it down to create a huge reservoir of frigid fresh water, equal to the volume of all the Great Lakes combined.

Scientists at MIT have now identified a key mechanism, which they call the “ice-ocean governor,” that controls how fast the Beaufort Gyre spins and how much fresh water it stores. In a paper published today in Geophysical Research Letters, the researchers report that the Arctic’s ice cover essentially sets a speed limit on the gyre’s spin.

In the past two decades, as temperatures have risen globally, the Arctic’s summer ice has progressively shrunk in size. The team has observed that, with less ice available to control the Beaufort Gyre’s spin, the current has sped up in recent years, gathering up more sea ice and expanding in both volume and depth.

If global temperatures continue to climb, the researchers expect that the mechanism governing the gyre’s spin will diminish. With no governor to limit its speed, the researchers say the gyre will likely transition into “a new regime” and eventually spill over, like an overflowing bathtub, releasing huge volumes of cold, fresh water into the North Atlantic, which could affect the global climate and ocean circulation.

“This changing ice cover in the Arctic is changing the system which is driving the Beaufort Gyre, and changing its stability and intensity,” says Gianluca Meneghello, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “If all this fresh water is released, it will affect the circulation of the Atlantic.”

Meneghello is a co-author of the paper, along with John Marshall, the Cecil and Ida Green Professor of Oceanography, Jean-Michel Campin and Edward Doddridge of MIT, and Mary-Louise Timmermans of Yale University.

A “new Arctic ocean”

There have been a handful of times in the recorded past when the Beaufort Gyre has spilled over, beginning with the Great Salinity Anomaly in the late 1960s, when the gyre sent a surge of cold, fresh water southward. Fresh water has the potential to dampen the ocean’s overturning circulation, affecting surface temperatures and perhaps storminess and climate.

Similar events could transpire if the Arctic ice controlling the Beaufort Gyre’s spin continues to recede each year.

“If this ice-ocean governor goes away, then we will end up with basically a new Arctic ocean,” Marshall says.

“Nature has a natural governor”

The researchers began looking into the dynamics of the Beaufort Gyre several years ago. At that time, they used measurements taken by satellites between 2003 and 2014 to track the movement of the Arctic ice cover, along with the speed of the Arctic wind. They used these measurements of ice and wind speed to estimate how fast the Beaufort Gyre must be downwelling, or spinning down beneath the ice. But the number they came up with was much smaller than what they expected.

“We thought there was a coding error,” Marshall recalls. “But it turns out there was something else kicking back.” In other words, there must be some other mechanism that was limiting, or slowing down, the gyre’s spin.

The team recalculated the gyre’s speed, this time by including estimates of ocean current activity in and around the gyre, which they inferred from satellite measurements of sea surface heights. The new estimate, Meneghello says, was “much more reasonable.”

In this new paper, the researchers studied the interplay of ice, wind, and ocean currents in more depth, using a high-resolution, idealized representation of ocean circulation based on the MIT General Circulation Model, built by Marshall’s group. They used this model to simulate the seasonal activity of the Beaufort Gyre as the Arctic ice expands and recedes each year.

They found that in the spring, as the Arctic ice melts away, the gyre is exposed to the wind, which acts to whip up the ocean current, causing it to spin faster and draw down more fresh water from the Arctic’s river runoff and melting ice. In the winter, as the Arctic ice sheet expands, the ice acts as a lid, shielding the gyre from the fast-moving winds. As a result, the gyre spins against the underside of the ice and eventually slows down.

“The ice moves much slower than wind, and when the gyre reaches the velocity of the ice, at this point, there is no friction — they’re rotating together, and there’s nothing applying a stress [to speed up the gyre],” Meneghello says. “This is the mechanism that governs the gyre’s speed.”

“In mechanical systems, the governor, or limiter, kicks in when things are going too fast,” Marshall adds. “We found nature has a natural governor in the Arctic.”
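
The governor can be caricatured in a few lines of code: the surface stress on the gyre scales with the velocity difference between the gyre and whatever sits above it, so the spin saturates near the ice speed in winter and climbs once the lid melts. This toy relaxation model is only a sketch; the wind speed, coupling efficiency, and relaxation rate below are illustrative, not values from the paper.

    def spin_gyre(ice_covered, u_wind=8.0, u_ice=0.1, eff=0.02, r=0.05):
        """Toy ice-ocean governor: surface stress drags the gyre toward whatever
        lies above it -- fast winds in open water, slow-drifting ice under a lid."""
        u, history = 0.0, []
        for iced in ice_covered:
            target = u_ice if iced else eff * u_wind   # stress vanishes once speeds match
            u += r * (target - u)                      # relax toward the surface forcing
            history.append(u)
        return history

    ice_year = [True] * 150 + [False] * 215            # ice lid for roughly five months
    daily_speed = spin_gyre(ice_year)                  # spin-up begins when the lid melts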

The evolution of sea ice over the Beaufort Gyre: In springtime, as ice thaws and melts into the sea, the gyre is exposed to the Arctic winds. Courtesy of the researchers

“In a warming world”

Marshall and Meneghello note that, as Arctic temperatures have risen in the last two decades, and summertime ice has shrunk with each year, the speed of the Beaufort Gyre has increased. Its currents have become more variable and unpredictable, and are only slightly slowed by the return of ice in the winter.

“At some point, if this trend continues, the gyre can’t swallow all this fresh water that it’s drawing down,” Marshall says. Eventually, the levee will likely break and the gyre will burst, releasing hundreds of billions of gallons of cold, fresh water into the North Atlantic.

An increasingly unstable Beaufort Gyre could also disrupt the Arctic’s halocline — the layer of ocean water underlying the gyre’s cold, fresh water that insulates it from much deeper, warmer, and saltier water. If the halocline is somehow weakened by a more unstable gyre, this could encourage warmer waters to rise up, further melting the Arctic ice.

“This is part of what we’re seeing in a warming world,” Marshall says. “We know the global mean temperatures are going up, but the Arctic temperatures are going up even more. So the Arctic is very vulnerable to climate change. And we’re going to live through a period where the governor goes away, essentially.”

This research was supported, in part, by the National Science Foundation.



from MIT News - Oceanography and ocean engineering https://ift.tt/2QVQw7F

Monday, October 15, 2018

Technique quickly identifies extreme event statistics

Seafaring vessels and offshore platforms endure a constant battery of waves and currents. Over decades of operation, these structures can, without warning, meet head-on with a rogue wave, freak storm, or some other extreme event, with potentially damaging consequences.

Now engineers at MIT have developed an algorithm that quickly pinpoints the types of extreme events that are likely to occur in a complex system, such as an ocean environment, where waves of varying magnitudes, lengths, and heights can create stress and pressure on a ship or offshore platform. The researchers can simulate the forces and stresses that extreme events — in the form of waves — may generate on a particular structure.

Compared with traditional methods, the team’s technique provides a much faster, more accurate risk assessment for systems that are likely to endure an extreme event at some point during their expected lifetime, by taking into account not only the statistical nature of the phenomenon but also the underlying dynamics.

“With our approach, you can assess, from the preliminary design phase, how a structure will behave not to one wave but to the overall collection or family of waves that can hit this structure,” says Themistoklis Sapsis, associate professor of mechanical and ocean engineering at MIT. “You can better design your structure so that you don’t have structural problems or stresses that surpass a certain limit.”

Sapsis says that the technique is not limited to ships and ocean platforms, but can be applied to any complex system that is vulnerable to extreme events. For instance, the method may be used to identify the type of storms that can generate severe flooding in a city, and where that flooding may occur. It could also be used to estimate the types of electrical overloads that could cause blackouts, and where those blackouts would occur throughout a city’s power grid.

Sapsis and Mustafa Mohamad, a former graduate student in Sapsis’ group who is now an assistant research scientist at the Courant Institute of Mathematical Sciences at New York University, are publishing their results this week in the Proceedings of the National Academy of Sciences.

Bypassing a shortcut

Engineers typically gauge a structure’s endurance to extreme events by using computationally intensive simulations to model a structure’s response to, for instance, a wave coming from a particular direction, with a certain height, length, and speed. These simulations are highly complex, as they model not just the wave of interest but also its interaction with the structure. By simulating the entire “wave field” as a particular wave rolls in, engineers can then estimate how a structure might be rocked and pushed by a particular wave, and what resulting forces and stresses may cause damage.

These risk assessment simulations are incredibly precise and in an ideal situation might predict how a structure would react to every single possible wave type, whether extreme or not. But such precision would require engineers to simulate millions of waves, with different parameters such as height and length scale — a process that could take months to compute. 

“That’s an insanely expensive problem,” Sapsis says. “To simulate one possible wave that can occur over 100 seconds, it takes a modern graphic processor unit, which is very fast, about 24 hours. We’re interested to understand what is the probability of an extreme event over 100 years.”
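
Sapsis’s numbers make the brute-force cost concrete. Assuming the quoted rate of 24 hours of GPU time per 100 seconds of simulated waves, covering 100 years episode by episode works out to roughly 86,000 GPU-years; the back-of-envelope arithmetic below is illustrative, not a figure from the paper.

    seconds_per_year = 365 * 24 * 3600            # ~3.15e7 seconds
    episodes = 100 * seconds_per_year // 100      # a century, in 100-second chunks
    gpu_days = episodes * 1.0                     # 24 hours (1 day) of GPU time per chunk
    print(gpu_days / 365.0)                       # ~86,000 GPU-years of computing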

As a more practical shortcut, engineers use these simulators to run just a few scenarios, choosing to simulate several random wave types that they think might cause maximum damage. If a structural design survives these extreme, randomly generated waves, engineers assume the design will stand up against similar extreme events in the ocean.

But in choosing random waves to simulate, Sapsis says, engineers may miss other less obvious scenarios, such as combinations of medium-sized waves, or a wave with a certain slope that could develop into a damaging extreme event.

“What we have managed to do is to abandon this random sampling logic,” Sapsis says.

A fast learner

Instead of running millions of waves or even several randomly chosen waves through a computationally intensive simulation, Sapsis and Mohamad developed a machine-learning algorithm to first quickly identify the “most important” or “most informative” wave to run through such a simulation.

The algorithm is based on the idea that each wave has a certain probability of contributing to an extreme event on the structure. The probability itself has some uncertainty, or error, since it represents the effect of a complex dynamical system. Moreover, some waves are more certain to contribute to an extreme event than others.

The researchers designed the algorithm so that they can quickly feed in various types of waves and their physical properties, along with their known effects on a theoretical offshore platform. From the known waves that the researchers plug into the algorithm, it can essentially “learn” and make a rough estimate of how the platform will behave in response to any unknown wave. Through this machine-learning step, the algorithm learns how the offshore structure behaves over all possible waves. It then identifies a particular wave that maximally reduces the error of the probability for extreme events. This wave has a high probability of occurring and leads to an extreme event. In this way the algorithm goes beyond a purely statistical approach and takes into account the dynamical behavior of the system under consideration.
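
The loop can be sketched as a form of active learning: fit a cheap surrogate to the simulations run so far, then pick the next wave where the surrogate is most uncertain about a damaging response. The sketch below uses a Gaussian process surrogate from scikit-learn with a deliberately simplified acquisition score; the response function, one-parameter wave description, and score are all illustrative stand-ins for the method in the paper.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_simulation(wave_height):
        """Stand-in for the costly hydrodynamic solver: peak stress for one wave."""
        return wave_height**2 * np.exp(-wave_height / 4.0)  # invented response curve

    X = np.array([[1.0], [3.0], [5.0], [7.0]])              # four "typical" starter waves
    y = np.array([expensive_simulation(h) for (h,) in X])

    for _ in range(12):                                     # ~16 total runs, loosely mirroring the study
        surrogate = GaussianProcessRegressor().fit(X, y)
        candidates = np.linspace(0.5, 10.0, 200).reshape(-1, 1)
        mean, std = surrogate.predict(candidates, return_std=True)
        # Simplified acquisition: probe where the surrogate is least certain,
        # weighted toward waves predicted to load the structure heavily.
        score = std * np.maximum(mean, 0.0)
        x_next = candidates[np.argmax(score)]
        X = np.vstack([X, x_next.reshape(1, -1)])
        y = np.append(y, expensive_simulation(x_next[0]))

In the paper’s terms, the selection step targets the wave that most reduces the error in the extreme-event probability; the uncertainty-times-severity score above is only a crude proxy for that criterion.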

The researchers tested the algorithm on a theoretical scenario involving a simplified offshore platform subjected to incoming waves. The team started out by plugging four typical waves into the machine-learning algorithm, including the waves’ known effects on an offshore platform. From this, the algorithm quickly identified the dimensions of a new wave that has a high probability of occurring, and it maximally reduces the error for the probability of an extreme event.

The team then plugged this wave into a more computationally intensive, open-source simulation to model the response of a simplified offshore platform. They fed the results of this first simulation back into their algorithm to identify the next best wave to simulate, and repeated the entire process. In total, the group ran 16 simulations over several days to model a platform’s behavior under various extreme events. In comparison, the researchers carried out simulations using a more conventional method, in which they blindly simulated as many waves as possible, and were able to generate similar statistical results only after running thousands of scenarios over several months.

MIT researchers simulated the behavior of a simplified offshore platform in response to the waves that are most likely to contribute to an extreme event. Courtesy of the researchers

Sapsis says the results demonstrate that the team’s method quickly homes in on the waves that are most certain to be involved in an extreme event, and provides designers with more informed, realistic scenarios to simulate, in order to test the endurance of not just offshore platforms, but also power grids and flood-prone regions.

“This method paves the way to perform risk assessment, design, and optimization of complex systems based on extreme events statistics, which is something that has not been considered or done before without severe simplifications,” Sapsis says. “We’re now in a position where we can say, using ideas like this, you can understand and optimize your system, according to risk criteria to extreme events.”

This research was supported, in part, by the Office of Naval Research, Army Research Office, and Air Force Office of Scientific Research, and was initiated through a grant from the American Bureau of Shipping.



from MIT News - Oceanography and ocean engineering https://ift.tt/2IXvW4j

Tuesday, October 9, 2018

Five from MIT earn Simons Foundation Postdoctoral Fellowships in Marine Microbial Ecology

Four current and former students in the MIT-Woods Hole Oceanographic Institution (MIT-WHOI) Joint Program and one postdoc from the Department of Civil and Environmental Engineering (CEE) have been awarded Simons Foundation Postdoctoral Fellowships in Marine Microbial Ecology, bringing the total of MIT awardees to five out of the nine fellowships granted nationally in 2018.

The Simons Foundation exists to advance the frontiers of research in mathematics and the basic sciences. Its Life Sciences division supports basic research on fundamental questions in biology, and is currently focused on origins of life, microbial oceanography, microbial ecology and evolution, and support of early career scientists. For the postdoctoral fellowships in marine microbial ecology, the foundation encourages applicants outside of strictly ocean research, seeking researchers interested in using cross-disciplinary experience, modeling, and theory development to explore the interrelationship of microorganisms and ocean processes.

“Postdoctoral fellows bring new ideas and energy to a field, so support for postdocs not only helps launch their careers but also pushes the field forward,” says Marian Carlson, director of life sciences at the foundation.

The awards are for three years and include an annual stipend and $25,000 towards research support.

B.B. Cael

MIT-WHOI Joint Program graduate student B.B. Cael — currently working with Professor Mick Follows of the Department of Earth, Atmospheric and Planetary Sciences at MIT — successfully sought Simons Foundation support for a postdoctoral fellowship to build upon his thesis research on the export of biogenic carbon out of the surface ocean and attenuation of sinking particulate matter (SPM) through the ocean’s interior.

“Phytoplankton living in the sunlit surface ocean mediate the transformation of energy, carbon, and inorganic nutrients within the global marine biosphere,” Cael explains. “In the open ocean, the fraction of SPM that is not ‘remineralized’ or degraded by microbes in the photosynthetic zone becomes sequestered well below the permanent thermocline and is effectively removed from exchange with the atmosphere for decades to millennia. This process is one of many ways in which ocean ecology plays a role in our planet’s climate.”

As a postdoc with Angelique E. White in the Department of Oceanography at the University of Hawai’i, Cael will collect measurements to develop and test plausible and mechanistic theories for SPM flux that might provide an improved understanding for climate and ocean models.

Cael holds a BA in mathematics, human biology, and philosophy, and an MS in applied mathematics, both from Brown University.

Matti Gralka

MIT CEE postdoc Matti Gralka studies microscopic interactions in complex microbial communities on chitin particles in the lab of Otto Cordero, the Doherty Assistant Professor in Ocean Utilization and assistant professor of civil and environmental engineering at MIT. He plans to use the Simons award to investigate the resistance and resilience of marine microbial communities to perturbations.

“I am a physicist broadly interested in applying quantitative experiments and models towards understanding fundamental principles about biological systems and processes,” says Gralka. “At MIT, I will study the interplay of ecology and evolution, i.e., can we predict the assembly and function of microbial communities, their adaptation and response to perturbations, without a full knowledge of all microscopic details?”

Prior to MIT, Gralka completed his PhD in physics at the University of California at Berkeley working with Professor Oskar Hallatschek to study evolutionary dynamics in microbial colonies, investigating how spatial structure affects the action of selection.

Bennett Lambert

With this award from the Simons Foundation, graduate student Bennett Lambert of CEE and the MIT-WHOI Joint Program will be pursuing his postdoctoral fellowship at the University of Washington, working with E. Virginia Armbrust on the behavior of marine microbes and the role diversity plays in survival.

Lambert’s current research in CEE Visiting Associate Professor Roman Stocker’s lab investigates the interactions of individual microbes and how those interactions scale up to affect biogeochemistry in the oceans. Traditional oceanographic techniques cannot be used to investigate these microorganisms, so Lambert and his colleagues engineered an in situ chemotaxis assay (ISCA), which allows the investigation of microbial behavior in the organisms’ natural environment.

“To examine the interactions, I've been working to develop microfluidic techniques that can be applied in both the field and the lab. In the Armbrust Lab, I'll be continuing in the same vein and applying microfluidic techniques to study phenotypic heterogeneity in marine picoeukaryotes,” says Lambert.

Prior to MIT, Lambert completed his BS in civil and environmental engineering at the University of Alberta.

Also receiving 2018 Simons Foundation fellowships in marine microbial ecology are two alumni of the MIT-WHOI Joint Program: Emily Zakem PhD ’17 and Nicholas Hawko PhD ’17. Zakem, herself a former member of the Follows Group at MIT, will explore “what controls the transition from aerobic to anaerobic microbial activity in the ocean” in the laboratory of Professor Naomi Levine at the University of Southern California. Also at the University of Southern California, Hawko will be working on “regional versus phylogenetic inheritance of iron metabolic traits in Prochlorococcus” with Professor Seth John.

A complete list of the award recipients and their projects is available at the Simons Foundation website.



from MIT News - Oceanography and ocean engineering https://ift.tt/2ILkwAo

Friday, September 28, 2018

Beach sand ripples can be fingerprints for ancient weather conditions

When a coastal tide rolls out, it can reveal beautiful ripples in the temporarily exposed sand. These same undulating patterns can also be seen in ancient, petrified seabeds that have been exposed in various parts of the world and preserved for millions or even billions of years.

Geologists look to ancient sand ripples for clues to the environmental conditions in which they formed. For instance, the spacing between ripples is proportional to the depth of the water and the size of the waves that molded the underlying ripples.
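
One commonly used version of that proportionality comes from linear wave theory: the near-bed orbital diameter is d0 = H / sinh(kh), where H is the wave height, h the water depth, and k the wavenumber from the dispersion relation, and the spacing of orbital ripples is empirically about 0.65 times d0. The sketch below solves the dispersion relation numerically; the 0.65 factor is a standard empirical rule of thumb, not a value from this study.

    import math

    def wavenumber(period_s, depth_m, g=9.81):
        """Solve the linear dispersion relation w^2 = g*k*tanh(k*h) for k."""
        w = 2.0 * math.pi / period_s
        k = w * w / g                                   # deep-water first guess
        for _ in range(100):                            # damped fixed-point iteration
            k = 0.5 * (k + w * w / (g * math.tanh(k * depth_m)))
        return k

    def ripple_spacing(wave_height_m, period_s, depth_m):
        """Orbital-ripple spacing from wave conditions (empirical factor ~0.65)."""
        k = wavenumber(period_s, depth_m)
        d0 = wave_height_m / math.sinh(k * depth_m)     # near-bed orbital diameter
        return 0.65 * d0

    # Waves 1 m high with an 8-second period in 5 m of water -> spacing of ~1 m:
    spacing = ripple_spacing(1.0, 8.0, 5.0)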

But sand ripples aren’t always perfect, parallel carbon copies of each other; they can display various kinks and swirls. Can these more subtle, seemingly random deviations or defects tell us something about the conditions in which a sandy seabed formed?

The answer, according to researchers from MIT and elsewhere, is yes. In a paper published online and appearing in the Oct. 1 issue of Geology, the team reports that some common defects found in both ancient and modern seabeds are associated with certain wave conditions. In particular, their findings suggest that ripple defects resembling hourglasses, zigzags, and tuning forks were likely shaped in periods of environmental flux — for instance, during strong storms, or significant changes in tidal flows.

“The type of defect you see in ripples could tell you about how dramatic the shifts in weather conditions were at the time,” says Taylor Perron, associate professor of geology and associate head of MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “We can use these defects as fingerprints to tell not just what the average conditions were in the past, but how things were changing.”

Ripple defects in ancient sandbeds may also influence how fluids flow through sedimentary rocks, including underground reservoirs that hold water, oil and gas, or even stored carbon dioxide, according to Perron.

In addition, he says, ripple patterns in modern sand act to roughen the seabed, slowing down ocean currents near the shore. Knowing how ripples change in response to shifting waves and tides may therefore help predict coastal erosion and flooding.

Perron’s co-authors on the paper are former MIT graduate student Kimberly Huppert ’11, PhD ’17, former undergraduate and current postdoc Abigail Koss ’12, Paul Myrow of Colorado College, and former undergraduate Andrew Wickert ’08 of the University of Minnesota.

Wrinkles preserved

The team began looking into the significance of ripple defects several years ago, when Myrow, who at the time was spending his sabbatical at MIT, showed Perron some photos that he had taken of sedimentary rocks etched with ripples and grooves. The rocks were, in fact, ancient sandbeds that were hundreds of millions of years old.

Wave-sculpted ripples form as waves travel across the surface of a body of liquid. These waves cause water beneath the surface to circle around and around, generating oscillating flows that pick up sand grains and set them down in a process that eventually carves out troughs and grooves throughout the sandbed.

But how could such delicate patterns be preserved for millions of years? Perron says that various processes could essentially set ripples in place. For instance, if the water level suddenly dropped, it could leave a sand bed’s ripples exposed to the air, drying them out and hardening them to some extent, so that they retained their patterns even as more sediment slowly layered itself on top of them over billions of years.

Similarly, if a finer sediment like mud or silt covers a sand bed, such as after a large storm, these sediments could blanket the existing ripples. As Perron explains, this would essentially “armor them, keeping the waves from eroding the ripples before more sediment buries them.” Over time, the sediments turn into rock as they are buried deep below Earth’s surface. Later, the rock overlaying the ripples can naturally erode away, exposing the preserved ripples at the surface again.

In looking through photos of sand ripples, Perron and Myrow noticed small defects resembling tuning forks, zigzags, and hourglasses, across both ancient and modern sandbeds.

“People have noticed these defects before, but we wondered, are they just random, or do they actually contain some information?” Perron says.

Paddling through waves

The researchers set out to study the various wave conditions that generate certain ripple patterns and defects. To do this, they built an acrylic wave tank measuring 60 centimeters wide, 50 centimeters deep, and 7 meters long. At one end of the tank, they attached a motor-driven paddle, which swished back and forth to generate waves that traveled across the tank.

At the other end of the tank, they erected an artificial sloping “beach” covered in a polymer mesh. This setup served to minimize any wave reflections: As a wave crashed onto the artificial beach, the energy dissipated within the mesh instead of splashing back and influencing oncoming waves.

The team filled the tank with a 5-centimeter-thick bed of fine sand and enough water to reach 40 centimeters in depth. For each experiment, they set the paddle to swish back and forth at a constant distance, and recorded the sand bed as ripples formed. At a certain point, they observed that the ripples — and in particular, the spacing between the ripples — reached a stable, consistent pattern. They recorded this spacing, along with the speed and amplitude of the paddle, and then, over 32 experimental runs, either increased or decreased the paddle’s motion, causing the ripples to morph again to either a wider or narrower spacing.

Interestingly, they found that, in the process of adjusting to a new spacing, ripples formed intermediary defects resembling zigzags, hourglasses, and tuning forks, depending on the wave conditions set by the tank’s paddle.

As the researchers shortened the paddle’s back-and-forth motion, this created shorter waves, narrower ripples, and patterns that resembled hourglasses. If the paddle’s motion was shortened even further — creating faster, shorter waves — a pattern of “secondary crests,” in which existing ripples appeared to form temporary “shadow” ripples on either side, took over. When the researchers widened the paddle’s motion, generating longer waves, the ripples formed zigzag patterns as they shifted to a wider spacing.

“If you see these types of defects in nature, we argue that the seabed was undergoing some kind of change in weather conditions, tides, or something else that affected water depth or waves, probably over the course of hours or days,” Perron says. “For instance, if you’re seeing lots of secondary crests, you can tell there was a pretty big change in the waves as opposed to a smaller change, which might give you hourglasses instead.”

The researchers observed that in all scenarios, patterns resembling tuning forks cropped up, even after ripples had reached a new, stable spacing.

“These tuning forks tend to stick around for a long time,” Perron says. “If you see these in modern or ancient rock, they suggest a seabed experienced a change, but then the conditions remained steady, and the bed had a long time to adjust.”

Going forward, Perron says geologists can use the team’s results as a blueprint to connect certain ripple defects with the water conditions that may have created them, in both the modern environment and in the ancient past.  

“We think these small defects can tell you a lot more about an ancient environment than just what the average size of the waves and water depth was,” Perron says. “They could tell you if it was an environment that had tides that were large enough to change ripples by this much, or if a place was experiencing periodic storms, even billions of years ago. And if we find ancient wave ripples on Mars, we’ll know how to read them.”

This research was supported, in part, by the National Science Foundation.



from MIT News - Oceanography and ocean engineering https://ift.tt/2zB1byV

Friday, August 17, 2018

Advancing undersea optical communications

Nearly five years ago, NASA and Lincoln Laboratory made history when the Lunar Laser Communication Demonstration (LLCD) used a pulsed laser beam to transmit data from a satellite orbiting the moon to Earth — more than 239,000 miles — at a record-breaking download speed of 622 megabits per second.

Now, researchers at Lincoln Laboratory are aiming to once again break new ground by applying the laser beam technology used in LLCD to underwater communications.

“Both our undersea effort and LLCD take advantage of very narrow laser beams to deliver the necessary energy to the partner terminal for high-rate communication,” says Stephen Conrad, a staff member in the Control and Autonomous Systems Engineering Group, who developed the pointing, acquisition, and tracking (PAT) algorithm for LLCD. “In regard to using narrow-beam technology, there is a great deal of similarity between the undersea effort and LLCD.”

However, undersea laser communication (lasercom) presents its own set of challenges. In the ocean, laser beams are hampered by significant absorption and scattering, which restrict both the distance the beam can travel and the data signaling rate. To address these problems, the Laboratory is developing narrow-beam optical communications that use a beam from one underwater vehicle pointed precisely at the receive terminal of a second underwater vehicle.

This technique contrasts with the more common undersea communication approach, which sends the transmit beam over a wide angle at the cost of achievable range and data rate. “By demonstrating that we can successfully acquire and track narrow optical beams between two mobile vehicles, we have taken an important step toward proving the feasibility of the laboratory’s approach to achieving undersea communication that is 10,000 times more efficient than other modern approaches,” says Scott Hamilton, leader of the Optical Communications Technology Group, which is directing this R&D into undersea communication.
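
To see why a narrow beam buys so much, consider a toy link budget that combines Beer-Lambert exponential loss in water with simple cone-spreading geometry. The transmitter power, attenuation coefficient, beam angles, and aperture below are illustrative assumptions for the sketch, not Lincoln Laboratory figures.

```python
import math

def received_power_w(p_tx_w, range_m, attenuation_per_m, half_angle_rad, aperture_m2):
    """Toy link budget: Beer-Lambert exponential loss in water times the
    geometric fraction of a cone-spread beam captured by the receive aperture."""
    water_loss = math.exp(-attenuation_per_m * range_m)
    footprint_m2 = math.pi * (range_m * math.tan(half_angle_rad)) ** 2
    geometry = min(1.0, aperture_m2 / footprint_m2)
    return p_tx_w * water_loss * geometry

# Illustrative values only: 1 W transmitter, 5 cm^2 receive aperture, and
# c = 0.05/m, a blue-green attenuation figure often quoted for clear ocean water.
for label, half_angle_deg in [("narrow", 0.1), ("wide", 30.0)]:
    p = received_power_w(1.0, 100.0, 0.05, math.radians(half_angle_deg), 5e-4)
    print(f"{label:6s} beam at 100 m: {p:.3e} W received")
```

With these toy numbers the narrow beam delivers roughly five orders of magnitude more power at 100 meters, the same ballpark as the 10,000-fold efficiency gain described above.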

Most autonomous systems operating above water rely on GPS for positioning and timing data; however, because GPS signals do not penetrate the surface of water, submerged vehicles must find other ways to obtain these important data. “Underwater vehicles rely on large, costly inertial navigation systems, which combine accelerometer, gyroscope, and compass data, as well as other data streams when available, to calculate position,” says Thomas Howe of the research team. “The position calculation is noise sensitive and can quickly accumulate errors of hundreds of meters when a vehicle is submerged for significant periods of time.”

This positional uncertainty can make it difficult for an undersea terminal to locate and establish a link with incoming narrow optical beams. For this reason, "We implemented an acquisition scanning function that is used to quickly translate the beam over the uncertain region so that the companion terminal is able to detect the beam and actively lock on to keep it centered on the lasercom terminal’s acquisition and communications detector," researcher Nicolas Hardy explains. Using this methodology, two vehicles can locate, track, and effectively establish a link, despite the independent movement of each vehicle underwater.
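
One common way to implement such an acquisition scan (a generic pattern offered here for illustration, not necessarily the Laboratory's algorithm) is an outward spiral whose pitch matches the beam width, so the beam tiles the angular uncertainty cone without gaps:

```python
import math

def spiral_scan(uncertainty_rad, beam_rad, overlap=0.5):
    """Pointing offsets (azimuth, elevation) that tile an angular uncertainty
    cone with an Archimedean spiral whose pitch is one beam width per turn."""
    pitch = beam_rad * (1.0 - overlap)   # radial growth per full turn
    theta = 0.0
    points = [(0.0, 0.0)]                # start on boresight
    while True:
        # step along the arc by roughly one beam width
        r = pitch * theta / (2.0 * math.pi)
        theta += beam_rad / max(r, beam_rad)
        r = pitch * theta / (2.0 * math.pi)
        if r > uncertainty_rad:
            break
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Hypothetical numbers: +/-2 degrees of pointing uncertainty, 0.1-degree beam.
pts = spiral_scan(math.radians(2.0), math.radians(0.1))
print(f"{len(pts)} dwell points cover the uncertainty cone")
```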

Once the two lasercom terminals have locked onto each other and are communicating, the relative position between the two vehicles can be determined very precisely by using wide bandwidth signaling features in the communications waveform. With this method, the relative bearing and range between vehicles can be known precisely, to within a few centimeters, explains Howe, who worked on the undersea vehicles’ controls.
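
As a rough illustration of delay-based ranging, the sketch below recovers range from the lag that best aligns a received waveform with the transmitted one. The waveform, sample rate, and noise level are invented for the example, and real systems use far more refined signal processing than a single cross-correlation peak.

```python
import numpy as np

C_WATER = 2.998e8 / 1.33  # light speed in seawater (refractive index ~1.33), m/s

def estimate_range(tx, rx, sample_rate_hz):
    """Estimate one-way range from the delay that best aligns the received
    waveform with the transmitted one (peak of the cross-correlation)."""
    corr = np.correlate(rx, tx, mode="full")
    delay_samples = np.argmax(corr) - (len(tx) - 1)
    return delay_samples / sample_rate_hz * C_WATER

# Toy example: a 1 GS/s pseudorandom waveform delayed by ~444 ns (about 100 m one way).
rng = np.random.default_rng(0)
fs = 1e9
tx = rng.standard_normal(4096)
true_delay = int(round(100.0 / C_WATER * fs))   # delay in samples
rx = np.concatenate([np.zeros(true_delay), tx]) + 0.1 * rng.standard_normal(4096 + true_delay)
print(f"estimated range: {estimate_range(tx, rx, fs):.2f} m")
```

At a 1 GS/s sample rate, one sample of delay corresponds to about 22 centimeters of path in water, which is consistent with the few-centimeter precision quoted above once sub-sample interpolation is applied.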

To test their underwater optical communications capability, six members of the team recently completed a demonstration of precision beam pointing and fast acquisition between two moving vehicles in the Boston Sports Club pool in Lexington, Massachusetts. Their tests proved that two underwater vehicles could search for and locate each other in the pool within one second. Once linked, the vehicles could potentially use their established link to transmit hundreds of gigabytes of data in one session.

This summer, the team is traveling to regional field sites to demonstrate this new optical communications capability to U.S. Navy stakeholders. One demonstration will involve underwater communications between two vehicles in an ocean environment — similar to prior testing that the Laboratory undertook at the Naval Undersea Warfare Center in Newport, Rhode Island, in 2016. The team is planning a second exercise to demonstrate communications from above the surface of the water to an underwater vehicle — a proposition that has previously proven to be nearly impossible.

The undersea communication effort could tap into innovative work conducted by other groups at the laboratory. For example, integrated blue-green optoelectronic technologies, including gallium nitride laser arrays and silicon Geiger-mode avalanche photodiode arrays, could lead to terminal implementations with lower size, weight, and power, and to enhanced communication functionality.

In addition, the ability to move data at megabit- to gigabit-per-second transfer rates over distances that vary from tens of meters in turbid waters to hundreds of meters in clear ocean waters will enable undersea system applications that the laboratory is exploring.

Howe, who has done a significant amount of work with underwater vehicles, both before and after coming to the laboratory, says the team’s work could transform undersea communications and operations. “High-rate, reliable communications could completely change underwater vehicle operations and take a lot of the uncertainty and stress out of the current operation methods."



from MIT News - Oceanography and ocean engineering https://ift.tt/2OHxjWo

Tuesday, July 10, 2018

Collaboration to expand the study of microbial oceanography

Microbes sustain all of Earth’s habitats, including its largest biome, the global ocean. Microbes in the sea capture solar energy, catalyze biogeochemical transformations of important elements, produce and consume greenhouse gases, and fuel the marine food web. Measuring and modeling the distribution, composition, and function of microbial communities, and their interactions with the environment, are key to understanding these fundamental processes in the ocean.

The Simons Foundation, which provides generous funding for several lines of research within MIT's Department of Earth, Atmospheric and Planetary Sciences, recently extended its support for microbial oceanography with the establishment of the Simons Foundation Collaboration on Ocean Computational Biogeochemical Modeling of Marine Ecosystems (CBIOMES). Led by MIT professor of oceanography Michael Follows, CBIOMES draws together a multidisciplinary group of U.S. and international investigators bridging oceanography, statistics, data science, ecology, biogeochemistry, and remote sensing.

The goal of CBIOMES (pronounced “sea biomes”), which leverages and extends Follows’ Darwin Project activity, is to develop and apply quantitative models of the structure and function of marine microbial communities at seasonal and basin scales.

As Follows explains, “Microbial communities in the sea mediate the global cycles of elements including climatically significant carbon, sulfur and nitrogen. Photosynthetic microbes in the surface ocean fix these elements into organic molecules, fueling food webs that sustain fisheries and most other life in the ocean. Sinking and subducted organic matter is remineralized and respired in the dark, sub-surface ocean, maintaining a store of carbon about three times the size of the atmospheric inventory of CO2.”

The communities of microbes that sustain these global-scale cycles are functionally and genetically extremely diverse, non-uniformly distributed and sparsely sampled. Their biogeography reflects selection according to the relative fitness of myriad combinations of traits that govern interactions with the environment and other organisms. Trait-based theory and simulations provide tools with which to interpret biogeography and microbial mediation of biogeochemical cycles. Follows says, “Several outstanding challenges remain: Observations to constrain the biogeography of marine microbes are still sparse and based on eclectic sampling methods. Theories of the organization of the system have not been quantitatively tested, and the models used to simulate the system still lack sufficiently mechanistic biological foundations. Addressing these issues will enable meaningful, dynamic simulations and state estimation.”

CBIOMES seeks to integrate key new data sets in real time, as they are collected at sea, to enable direct tests of theoretical predictions and to synthesize an atlas of marine microbial biogeography suitable for testing specific ecological theories and quantifying the skill of numerical simulations. It also aspires to develop new trait-based models and simulations of regional and global microbial communities, bringing to bear the power of metabolic constraints and knowledge of macromolecular composition; and to analyze these data and models using statistical tools that interpolate and extrapolate the sparse data sets, formally quantify the skill of numerical simulations, and employ data-assimilation technologies to identify and optimize compatible model frameworks. “Together, the results of these efforts will advance new theoretical approaches and lead to improved global ocean-scale predictions and regional state-estimates, constrained by observed biogeography. They will provide a quantification of the associated biogeochemical fluxes,” says Follows.
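
As one concrete example of the kind of statistical interpolation mentioned above (illustrative only: the tool choice and the data here are this article's invention, not CBIOMES code), a Gaussian-process regression can turn a handful of sparse shipboard measurements into a gridded field with an uncertainty estimate at every point:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical sparse transect: chlorophyll (mg/m^3) sampled at a few latitudes.
lat_obs = np.array([[10.0], [14.0], [22.0], [30.0], [41.0]])
chl_obs = np.array([0.08, 0.10, 0.05, 0.21, 0.45])

# RBF kernel with a ~5-degree correlation length plus an observation-noise term.
kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(lat_obs, chl_obs)

# Interpolate onto a regular grid, with a standard deviation at each point.
lat_grid = np.linspace(8, 45, 75).reshape(-1, 1)
chl_mean, chl_std = gp.predict(lat_grid, return_std=True)
print(f"predicted chlorophyll at 25N: {chl_mean[34]:.3f} +/- {chl_std[34]:.3f}")
```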

Working with Follows on CBIOMES are principal investigators Stephanie Dutkiewicz of MIT; Jacob Bien, Christopher Edwards, and Jed Fuhrman of the University of Southern California; Zoe Finkel and Andrew Irwin of Mount Allison University in Canada; Shubha Sathyendranath of Plymouth Marine Laboratory in the U.K.; and Joseph Vallino of the Woods Hole Oceanographic Institution.

A meeting held at the Simons Foundation in New York City May 21 through 23 provided a first opportunity for collaborators to meet face-to-face, and gave investigators a forum to educate one another about their respective expertise and areas of activity, share initial progress, and coordinate collaborative efforts.

Discussions centered on how to determine the biogeography of marine microbes from empirical data; the role of statistical models in determining the relationships in space and time among organisms, traits, and environments; the complementary role of mechanistic models and how to simulate the systems being observed; and, in the context of model-data synthesis, how best to use empirical data to test theory and improve simulation skill.

“While the central question, 'What is the functional biogeography of a group of organisms in the oceans?', is relatively focused, the techniques being used are extremely varied, focusing a lot on computational tools but, uniquely, working hand-in-hand with data collection and data compilation,” says Follows. “I am particularly excited by everyone’s enthusiasm, the number of cross-connections and collaborations already underway, and the rapid progress that is happening on many fronts.”

Complementary to CBIOMES is the Simons Collaboration on Ocean Processes and Ecology (SCOPE) co-led by Ed DeLong of the MIT Department of Civil and Environmental Engineering and David Karl of the University of Hawaii. SCOPE’s focus is advancing understanding of marine biology, biogeochemistry, ecology and evolution of microbial processes by focusing on a representative ocean benchmark, Station ALOHA, located in the North Pacific Subtropical Gyre.

SCOPE-Gradients, a related project focused on understanding transitions between the North Pacific Subtropical Gyre and neighboring ecosystems, brings a rich stream of observational data to the CBIOMES effort. This transition region of open ocean is notable for exhibiting steep changes in environmental conditions (gradients) associated with dramatic changes in the microbial ecosystem. Several members of the SCOPE-Gradients team accompanied project principal investigator Virginia Armbrust of the University of Washington to the May CBIOMES meeting, where they shared a preliminary implementation of the dynamic atlas they are constructing to curate disparate observational data and model results within a common framework.

The mission of the Simons Foundation is to advance the frontiers of research in mathematics and the basic sciences. Co-founded in New York City by Jim and Marilyn Simons, the foundation exists to support basic — or discovery-driven — scientific research undertaken in the pursuit of understanding the phenomena of our world.

In addition to Michael Follows, other Simons Foundation-funded investigators in the MIT Department of Earth, Atmospheric and Planetary Sciences include Tanja Bosak, Gregory Fournier, and Roger Summons. Several MIT postdocs have been recipients of Simons Postdoctoral Fellowships, among them Alexandria Johnson, Sukrit Ranjan, and Christopher Follett.



from MIT News - Oceanography and ocean engineering https://ift.tt/2u9IEqx

Friday, July 6, 2018

Ties with MIT run deep for the US Navy’s top officer

Looking back on his MIT graduate student days in the late 1980s, Admiral John M. Richardson SM ’89, EE ’89, ENG ’89 recalls a quieter time. He was not yet helming the world’s most powerful navy, nor was global competition at sea nearly so intense.

Richardson is now the chief of naval operations (CNO), the senior four-star admiral leading the U.S. Navy. This position places him on the Joint Chiefs of Staff as adviser to the secretary of defense and the president. He draws on his deep ties to academe to help the Navy keep pace.

From his graduate student days to today, what has remained unchanged is the depth of his attachment to MIT and the warmth and respect between Richardson and his mentors in the MIT-Woods Hole Oceanographic Institution Joint Program.

“As a graduate student, John clearly stood out as brilliant, a leader, and wonderfully warm and friendly,” says Alan Oppenheim, an MIT Ford Professor of Engineering.

After his time at MIT and Woods Hole, Richardson went on to command the submarine USS Honolulu, a ship known in the Navy for the important missions with which it was tasked. Before that command, he was posted at the White House as President Clinton’s Navy adviser. Just before being selected to be the CNO, he was in charge of all of the nuclear reactor technology in the Navy.

“It is so striking that through his ascendancy in the Navy, John never lost these professional and personal qualities. He is as approachable today as he was back then,” Oppenheim says.

The power of relationships

Richardson recently took time from his schedule to articulate the significance of MIT in his life and career. He says friendships that began during graduate work quickly expanded to bluefish barbeques, bike riding, wind surfing, and listening to jazz and country music together, and many other things that “we still share even 30 years later.”

He speaks with affection of strong relationships with academics such as Oppenheim and Arthur Baggeroer, an MIT professor of mechanical, ocean, and electrical engineering and a Ford Professor of Engineering, Emeritus. “What I value most about my time at MIT are the enduring relationships with amazing people. Al, Art, and so many others have enriched my life so much — they are my mentors, my senseis.”

Richardson insists other alumni have made what he describes as “far more important contributions to the field of engineering.” For his part, he says, he has been able to apply his time at MIT to leading the Navy.

“In the end, it’s all about making our sailors the best in the world,” Richardson says. “The Navy that I'm so privileged to lead has always used world-leading technology, brought to life by our partnership with academe. MIT has always been a bright star in that constellation of innovation and excellence.” 

More like a family reunion

Richardson recalls a fall 2017 symposium about the future of signal processing in honor of Oppenheim, a pioneer in the field. “I'll never forget the warm feelings of camaraderie that defined Al’s conference on the future of signal processing and 80th birthday celebration.” He describes himself as “super nervous” after accepting the invitation to speak because he knew “the world’s best would be there to listen.”

“All of that anxiety was instantly dispelled by the love and respect Al engenders in others, and that will always be part of his legacy. We all felt like family by our association with him and MIT,” Richardson says.

At the symposium, the admiral outlined the challenges ahead for the Navy and invited solutions. “I want to share with you my problems to provide a template for those of you all with solutions,” he said, standing in full dress uniform. “This is a continuation of a great tradition that we have between the Navy and MIT.”

The Navy faced a submarine problem in the Atlantic during World War II that MIT helped solve through a rigorous application of emerging science in operations research, he said. “Academe came to our rescue there.”

The same was true for the Battle of Britain, during which MIT-developed naval anti-aircraft technology played a pivotal role in beating back large-scale attacks by Nazi Germany. “We have a long tradition of working together.”

Among other things, MIT has a long-standing Graduate Program in Naval Construction and Marine Engineering, run in close cooperation with the Navy since 1901. The program prepares U.S. Navy, U.S. Coast Guard, and foreign naval officers, along with other graduate students, for careers in ship design and construction.

Challenges in the maritime domain

The traffic on the ocean has increased by a factor of four over the past 25 years, Richardson said to a packed room during the conference on the future of signal processing. “Just picture that curve in your mind. The amount of food we get from the sea has increased by more than a factor of 10 in the same time period.”

“The Arctic ice cap is the smallest it has been since we started taking measurements and getting smaller, and that has tremendous implications for traffic routes and access to resources,” he said.

The internet of things will encompass some 30 billion connected devices by 2020. And 99 percent of web data rides on undersea cables on the sea floor. “It’s not about a cloud, it’s about the ocean,” said Richardson. “If cables are disturbed or disrupted, you can’t reconstitute that capacity via satellites or anything else; you can only get back about two percent.”

“Things are moving very quickly. It’s very competitive. We’ve done a lot of work to try and figure out — how should the Navy respond?” he said. Multiple analyses show a need for heightened naval capability. Yet even the most aggressive shipbuilding plan equates to reaching 350 ships in about 17 years.

In his presentation, Richardson pointed to a chart with icons representing the U.S. fleet: ships, satellites, submarines, and aircraft. Let’s redefine the axis, he said. The measure of naval capability no longer rests only on the numerical metric of physical things but also on the ability to network platforms and to manage information.

“Signal processing has a terrific and important role in helping us transcend just making more ships. We must make our ships – and our Navy – more capable as well,” said Richardson. He pointed to a new graph in which U.S. naval power rises beyond exponential curves as the fleet is deeply networked with the assistance of technologies such as artificial intelligence, human and machine teams, and quantum computing.

Drawing on academe

More recently, Richardson created Task Force Ocean, which seeks to link innovative research concepts with the needs of the U.S. Navy, especially undersea forces. The senior academic involved in these Navy efforts is Arthur Baggeroer.

“I have known every chief of naval operations over the last two decades, and John is by far the most engaged with academia,” says Baggeroer, who was the director of the MIT-WHOI program when Richardson enrolled in 1985. He also acted as academic advisor to Richardson and five additional naval officers in the program.

Over the years, Baggeroer kept up with Richardson as he rose through the ranks.

“He has been very supportive of the MIT-WHOI Joint Program and has taken steps to attract to the program younger officers with the same qualifications he had at the time,” adds Baggeroer. 

Setting a high bar

Richardson was, by all accounts, a star graduate student. His career track and leadership continue to inspire Navy students, says Tim Stanton, scientist emeritus at the Woods Hole Oceanographic Institution. He joined Oppenheim as Richardson’s thesis advisor.

“Admiral Richardson sets the gold standard for excellence and leadership in the Navy,” says Stanton. “As I advised many Navy students for the nearly 30 years after Admiral Richardson graduated, they frequently referenced his leadership as a benchmark for their career goals. Through his leadership, he not only directly impacted Navy operations, but also the next generation of leaders in the Navy.”

“I’m so grateful for the continued friendship, partnership and leadership of MIT with the Navy,” says Richardson. “MIT has had an amazing impact on me and my life. It literally changed the way I think about things.”



from MIT News - Oceanography and ocean engineering https://ift.tt/2MUTqrp

Thursday, May 10, 2018

Fundamental equations guide marine robots to optimal sampling sites

Observing the world’s oceans is increasingly a mission assigned to autonomous underwater vehicles (AUVs) — marine robots that are designed to drift, drive, or glide through the ocean without any real-time input from human operators. Critical questions that AUVs can help to answer are where, when, and what to sample for the most informative data, and how to optimally reach sampling locations.

MIT engineers have now developed systems of mathematical equations that forecast the most informative data to collect for a given observing mission, and the best way to reach the sampling sites.

With their method, the researchers can predict the degree to which one variable, such as the speed of ocean currents at a certain location, reveals information about some other variable, such as temperature at some other location — a quantity called “mutual information.” If the degree of mutual information between two variables is high, an AUV can be programmed to go to certain locations to measure one variable, to gain information about the other.  
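
For jointly Gaussian variables, mutual information has a simple closed form, I(X;Y) = -1/2 ln(1 - rho^2) nats, where rho is the correlation coefficient. The toy sketch below (invented data, and a deliberate simplification of the team's field forecasts) shows how a correlated pair of variables carries information while an unrelated pair carries essentially none:

```python
import numpy as np

def gaussian_mutual_information(x, y):
    """MI in nats for (approximately) jointly Gaussian samples:
    I(X;Y) = -0.5 * ln(1 - rho^2), with rho the correlation coefficient."""
    rho = np.corrcoef(x, y)[0, 1]
    return -0.5 * np.log(1.0 - rho**2)

# Toy fields: current speed at site A partially determines temperature at site B.
rng = np.random.default_rng(1)
current_a = rng.standard_normal(10_000)
temp_b = 0.8 * current_a + 0.6 * rng.standard_normal(10_000)   # correlated
unrelated = rng.standard_normal(10_000)                        # independent

print(f"I(current_A; temp_B)    ~ {gaussian_mutual_information(current_a, temp_b):.3f} nats")
print(f"I(current_A; unrelated) ~ {gaussian_mutual_information(current_a, unrelated):.3f} nats")
```

In this toy setting, measuring the current at site A would be worth roughly half a nat about the temperature at site B, which is the kind of quantity an AUV planner could rank sampling sites by.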

The team used their equations and an ocean model they developed, called Multidisciplinary Simulation, Estimation, and Assimilation Systems (MSEAS), in sea experiments to successfully forecast fields of mutual information and guide actual AUVs.

“Not all data are equal,” says Arkopal Dutt, a graduate student in MIT’s Department of Mechanical Engineering. “Our criteria … allow the autonomous machines to pinpoint sensor locations and sampling times where the most informative measurements can be made.”

To determine how to safely and efficiently reach ideal sampling destinations, the researchers developed a way to help AUVs exploit the uncertain ocean's activity by forecasting a "reachability front": a dynamic three-dimensional region of the ocean that an AUV would be guaranteed to reach within a certain time, given the AUV's power constraints and the ocean's currents. The team's method enables a vehicle to surf currents that would bring it closer to its destination, and avoid those that would throw it off track.

When the researchers compared their reachability forecasts with the routes of actual AUVs observing a region of the Arabian Sea, they found their predictions matched where the vehicles were able to navigate, over long periods of time.

Ultimately, the team’s methods should help vehicles explore the ocean in an intelligent, energy-efficient manner.

“Autonomous marine robots are our scouts, braving the rough seas to collect data for us,” says mechanical engineering graduate student Deepak Subramani. “Our math equations help the scouts reach the desired locations and reduce their energy usage by intelligently using the ocean currents.”

The researchers, led by Pierre Lermusiaux, professor of mechanical engineering and ocean science and engineering at MIT, have laid out their results in a paper soon to appear in a volume of the book series, “The Sea,” published by the Journal of Marine Research.

In addition to Dutt and Subramani, Lermusiaux’s team includes Jing Lin, Chinmay Kulkarni, Abhinav Gupta, Tapovan Lolla, Patrick Haley, Wael Hajj Ali, Chris Mirabito, and Sudip Jana, all from the Department of Mechanical Engineering.

Quest for the most informative data

To validate their approach, the researchers showed that they could successfully predict the measurements that were the most informative for a varied set of goals. For example, they forecast the observations that were optimal for testing scientific hypotheses, learning if the ocean model equations themselves are correct or not, estimating parameters of marine ecosystems, and detecting the presence of coherent structures in the ocean. They confirmed that their optimal observations were 50 to 150 percent more informative than an average observation.

To reach the optimal observing locations, AUVs must navigate through the ocean. Traditionally, planning paths for robots has been done in relatively static environments. But planning through the ocean is a different story, as strong currents and eddies can constantly change, be uncertain, and push a vehicle off its preplanned course.

The MIT team thus developed path-planning algorithms from fundamental principles with the ocean in mind. They modified an existing equation, known as the Hamilton-Jacobi equation, to determine an AUV's reachability front, the farthest perimeter a vehicle is guaranteed to reach in a given amount of time. The equation is based on three main variables: time, a vehicle's specific propulsion constraints, and advection, the transport by dynamic ocean currents, which the group predicts using its MSEAS ocean model.
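
In level-set form, equations of this family read roughly as phi_t + F |grad phi| + v . grad phi = 0, where the zero level set of phi traces the reachability front, F is the vehicle's propulsion speed, and v is the current field. The one-dimensional sketch below is a minimal caricature with made-up numbers, far simpler than the team's MSEAS-coupled solvers, but it shows the essential behavior: the front expands faster downstream than upstream.

```python
import numpy as np

def evolve_front(phi, u, F, dx, dt, steps):
    """March the level-set equation  phi_t + F*|phi_x| + u*phi_x = 0  forward
    with simple upwind differences; phi < 0 marks the reachable set."""
    for _ in range(steps):
        fwd = (np.roll(phi, -1) - phi) / dx          # forward difference
        bwd = (phi - np.roll(phi, 1)) / dx           # backward difference
        # Godunov upwinding for the expanding F*|phi_x| term
        grad = np.maximum(np.maximum(bwd, 0.0)**2, np.minimum(fwd, 0.0)**2) ** 0.5
        # Upwind advection by the current u
        adv = np.where(u > 0, u * bwd, u * fwd)
        phi = phi - dt * (F * grad + adv)
    return phi

# Toy 1-D ocean: 10 km domain, 1 m/s vehicle, a uniform 0.5 m/s eastward current.
x = np.linspace(0, 10_000, 501)
dx = x[1] - x[0]
phi = np.abs(x - 2_000) - 100.0        # start: vehicle within 100 m of x = 2 km
u = 0.5 * np.ones_like(x)
phi = evolve_front(phi, u, F=1.0, dx=dx, dt=5.0, steps=720)  # one hour
reach = x[phi < 0]
print(f"after 1 h the front spans {reach.min():.0f} m to {reach.max():.0f} m")
```

With these numbers the front pushes about 5.4 kilometers downstream but only 1.8 kilometers upstream in an hour, exactly the asymmetry a current-aware planner exploits.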

With the new system, the AUVs can map out the most informative feasible paths and adapt their sampling plans as the uncertain ocean's currents shift over time. In a first large, open-ocean test, the team calculated probabilistic reachability fronts and the most informative paths for autonomous floats and gliders in the Indian Ocean, as part of the Northern Arabian Sea Circulation-autonomous research (NASCar) initiative of the Office of Naval Research (ONR).

Over several months, the researchers, working out of their MIT offices, provided daily reachability forecasts to the ONR team to help guide the underwater vehicles, collecting optimal observations along the way.

“It was basically not much sleeping,” Lermusiaux recalls. “The forecasts were three to seven days out, and we would assimilate data and update every day. We did quite well. On average, the gliders and floats ended up where desired and within the probabilistic areas that we predicted.”

A moment of truth pays off

Lermusiaux and his colleagues also utilized their systems to plan “time-optimal paths” — trajectories that would get an AUV to a certain location in the shortest amount of time, given the forecast ocean current conditions.

With colleagues from the MIT Lincoln Laboratory and Woods Hole Oceanographic Institution, they tested these time-optimal paths in real time by holding “races” between identical propelled AUVs, off the coast of Martha’s Vineyard. In each race, one AUV’s course was determined by the team’s time-optimal path, while another AUV followed a path with the shortest distance to the same destination.

“It was tense — who will win?” Subramani recalls. “This was the moment of truth for us, after all those years of theoretical development with math equations and proofs.”

The team’s work paid off. In every race, the AUV operating under the team’s forecast reached its destination first, performing about 15 percent faster than the competing AUV. The team’s forecast helped the winning AUV to avoid strong currents that at times acted to block the other AUV.

“It was amazing,” Kulkarni says. “Even though physically the two paths were less than a mile apart, following our predictions gave up to a 15 percent reduction in travel times. It shows our paths are truly time optimal.”

Among other applications, Lermusiaux, as a member of MIT’s Tata Center for Technology and Design, will be applying his ocean forecasting methods to help guide observations off the coast of India, where the vehicles will be tasked with monitoring fisheries to provide a potentially low-cost management system.

“AUVs are not very fast, and their autonomy is not infinite, so you really have to take into account the currents and their uncertainties, and model things rigorously,” Lermusiaux says. “Machine intelligence for these autonomous systems comes from rigorously deriving and merging governing differential equations and principles with control theory, information theory, and machine learning.”

This research was funded, in part, by the Office of Naval Research, the MIT Lincoln Laboratory, the MIT Tata Center, and the National Science Foundation.



from MIT News - Oceanography and ocean engineering https://ift.tt/2jLHWtV

Tuesday, April 10, 2018

Understanding microbial competition for nitrogen

Nitrogen is a hot commodity in the surface ocean. Primary producers including phytoplankton and other microorganisms consume and transform it into organic molecules to build biomass, while others transform inorganic forms to access their chemical store of energy. All of these steps are part of the complex nitrogen cycle of the upper water column.

About 200 meters down, just below the ocean’s sunlit zone, resides a layer of nitrite, an intermediate compound in the nitrogen cycle. Scientists have found this robust feature, called the primary nitrite maximum, throughout the world’s oxygenated oceans. While several individual hypotheses have been put forward, none have convincingly explained this marine signature until now.

A recent Nature Communications study led by researchers in the Program in Atmospheres, Oceans and Climate (PAOC) within MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) uses theory, modeling, and observational data to investigate the ecological mechanisms producing the observed nitrite accumulation and dictating its location in the water column. Lead author Emily Zakem — a former EAPS graduate student who is now a postdoc at the University of Southern California — along with EAPS Principal Research Scientist Stephanie Dutkiewicz and Professor Mick Follows show that physiological constraints and resource competition between phytoplankton and nitrifying microorganisms in the sunlit layer can yield this ocean trait. 

Regulating the biological pump

Despite its low oceanic concentration, nitrite (NO2-) plays a key role in global carbon and nitrogen cycles. Most of the nitrogen in the ocean resides in the inorganic form of nitrate (NO3-), which primary producers and microorganisms chemically reduce to build organic molecules. Remineralization occurs when the reverse process takes place: heterotrophic bacteria and other organisms break down these organic compounds into ammonium (NH4+), a form of inorganic nitrogen. Ammonium can then be consumed again by primary producers, which get their energy from light. Other microorganisms, called chemoautotrophs, also use the ammonium both to make new biomass and as a source of energy. To do this, they extract oxygen from seawater to transform the ammonium, a process called nitrification, which occurs in two steps: microbes first convert ammonium into nitrite, and then nitrite into nitrate.
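
Written out, the two steps follow the textbook nitrification stoichiometry (a standard summary, not notation from the paper):

```latex
\mathrm{NH_4^+} + \tfrac{3}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{NO_2^-} + \mathrm{H_2O} + 2\,\mathrm{H^+} \quad \text{(step 1: ammonium oxidation)}

\mathrm{NO_2^-} + \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{NO_3^-} \quad \text{(step 2: nitrite oxidation)}
```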

Somewhere along the line, nitrite has been accumulating at the base of the sunlit zone, which has implications for ocean biogeochemistry. “Broadly, we’re trying to understand what controls the remineralization of organic matter in the ocean. It’s that remineralization that is responsible for forming the biological pump, which is the extra storage of carbon in the ocean due to biological activity,” says Zakem. It’s this strong influence that nitrogen has on the global carbon cycle that captures Follows’ interest. “Growth of phytoplankton on nitrate is called ‘new production’ and that balances the amount that’s sinking out of the surface and controls how much carbon is stored in the ocean. Growth of phytoplankton on ammonium is called recycled production, which does not increase ocean carbon storage,” Follows says. “So we wish to understand what controls the rates of supply and relative consumption of these different nitrogen species.”

Battle for nitrogen 

The primary nitrite maximum sits between two groups of microorganisms in most of the world’s oceans. Above it, in the sunlit zone, are the phytoplankton; within the primary nitrite maximum and slightly below it resides an abundance of nitrifying microbes, in a region with high rates of nitrification. Researchers classify these microbes into two groups based on their preferred nitrogen source: the ammonium oxidizing organisms (AOO) and the nitrite oxidizing organisms (NOO). In high latitudes, such as Earth’s subpolar regions, nitrite accumulates in the surface sunlit zone as well as deeper.

Scientists have postulated two explanations, not mutually exclusive, for the build-up of nitrite: nitrification by chemoautotrophic microbes, and the reduction of nitrate to nitrite by stressed phytoplankton. Since isotopic evidence does not support the latter, the group looked into the former.

“The long-standing hypothesis was that the locations of nitrification were controlled by light inhibition of these [nitrifying] microorganisms, so the microorganisms that carry out this process were restricted from the surface,” Zakem says, implying that these nitrifying chemoautotrophs got sunburned. But instead of assuming that was true, the group examined the ecological interactions among these and other organisms in the surface ocean, letting the dynamics fall out naturally. To do this, they collected microbial samples from the subtropical North Pacific and evaluated their metabolic rates, efficiencies, and abundances, and they assessed the physiological needs and constraints of the different nitrifying microbes by reducing the biological complexity of their metabolisms down to the underlying chemistry, thus hypothesizing some of the more fundamental constraints. They used this information to inform the dynamics of the nitrifying microbes in both one-dimensional and three-dimensional biogeochemical models.

The group found that by employing this framework, they could resolve the interactions between these nitrifying chemoautotrophs and phytoplankton, and therefore simulate the accumulation of nitrite at the primary nitrite maximum in the appropriate locations. In the surface ocean, where inorganic nitrogen is limiting, phytoplankton and ammonium oxidizing microbes have similar abilities to acquire ammonium; but because phytoplankton need less nitrogen to grow and have a faster growth rate, they outcompete the nitrifiers, excluding them from the sunlit zone. In this way, the researchers provided an ecological explanation for where nitrification happens without having to rely on light inhibition dictating the location.
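
The logic is the classic R* result from resource-competition theory: the population that can keep growing at the lower ambient nutrient concentration draws the shared pool down below its rival's break-even point. Below is a minimal caricature in code (invented rates, a single shared ammonium pool, biomass tracked in nutrient units), not the study's one- or three-dimensional model:

```python
def compete(days=200.0, dt=0.01):
    """Two populations drawing on one ammonium pool (Monod uptake, chemostat
    style): the faster-growing competitor excludes the other."""
    phyto, aoo, n = 0.01, 0.01, 1.0     # biomasses and ammonium, arbitrary units
    supply, loss = 0.05, 0.1            # nutrient supply and mortality (per day)
    mu_p, k_p = 1.0, 0.1                # phytoplankton: fast growth
    mu_a, k_a = 0.5, 0.1                # ammonia oxidizers: slower growth
    for _ in range(int(days / dt)):
        g_p = mu_p * n / (k_p + n)      # Monod growth rates
        g_a = mu_a * n / (k_a + n)
        phyto += dt * phyto * (g_p - loss)
        aoo += dt * aoo * (g_a - loss)
        n += dt * (supply - g_p * phyto - g_a * aoo)
        n = max(n, 0.0)
    return phyto, aoo, n

phyto, aoo, n = compete()
print(f"phytoplankton {phyto:.3f}, ammonia oxidizers {aoo:.5f}, ammonium {n:.3f}")
```

With these illustrative rates the phytoplankton hold ammonium near their break-even concentration of roughly 0.011, well below the oxidizers' 0.025, so the oxidizers slowly wash out of the sunlit layer, mirroring the exclusion described above.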

Comparing the fundamental physiologies of the nitrifiers revealed that differences in metabolism and cell size could account for the nitrite build-up. The researchers found that the second step of nitrification, carried out by the nitrite oxidizers, requires more nitrogen for the same amount of biomass created, meaning the ammonia oxidizers can do more with less, and that nitrite oxidizers are less abundant than ammonia oxidizers. The nitrite oxidizing microbes also face a tighter surface-to-volume constraint than the smaller, ubiquitous ammonium oxidizing microbes, making nitrogen uptake more difficult. “This is an alternative explanation for why nitrite should accumulate,” Zakem says. “We have two reasons that point in the same direction. We can’t distinguish which one it is, but all of the observations are consistent with either of these two or some combination of both being the control.”

The researchers were also able to use a global climate model to reproduce the accumulation of nitrite in the sunlit zone of places like subpolar regions, where phytoplankton are limited by a resource other than nitrogen, such as light or iron. Here, nitrifiers can coexist with phytoplankton, since there is more nitrogen available to them. Additionally, the deep mixed layer in the water can draw resources away from the phytoplankton, giving the nitrifiers a better chance of survival at the surface.

“There’s this long standing hypothesis that the nitrifiers were inhibited by light and that’s why they only exist at the subsurface,” Zakem says. “We’re saying that maybe we have a more fundamental explanation: that this light inhibition does exist because we’ve observed it, but that’s a consequence of long-term exclusion from the surface.”

Thinking bigger

“This study pulled together theory, numerical simulations, and observations to tease apart and provide a simple quantitative and mechanistic description of some phenomena that were mysterious in the ocean,” Follows says. “That helps us tease apart the nitrogen cycle, which has an impact on the carbon cycle. It’s also opened up the box for using these kinds of tools to address other questions in microbial oceanography.” He notes that the fact that these microbes are shunting ammonium into nitrate near the sunlit zone complicates the story of carbon storage in the ocean.

Two researchers who were not involved with the study, Karen Casciotti, associate professor in the Stanford University Department of Earth System Science, and Angela Landolfi, a scientist in the marine biogeochemical modeling department at the GEOMAR Helmholtz Centre for Ocean Research Kiel, agree. “This study is of great significance, as it provides evidence of how organisms’ individual traits affect competitive interactions among microbial populations and provide a direct control on nutrient distributions in the ocean,” says Landolfi. “In essence, Zakem et al. provide a better understanding of the link between different levels of complexity, from the individual to the community up to the environmental level, providing a mechanistic framework to predict changes in community composition and their biogeochemical impact under climatic changes.”

This research was funded by the Simons Foundation’s Simons Collaboration on Ocean Processes and Ecology, the Gordon and Betty Moore Foundation, and the National Science Foundation.



from MIT News - Oceanography and ocean engineering https://ift.tt/2qnakGn