Exploration Systems Mission Directorate
NASA’s Exploration Systems Mission Directorate (ESMD) develops capabilities and supporting research and technology that will make human and robotic exploration possible. It also makes sure that astronaut explorers are safe, healthy, and can perform their work during long-duration space exploration. In the near-term, ESMD does this by developing robotic precursor missions, human transportation elements, and life-support systems.
Composite Crew Module Design, Development Prove Successful
In January of 2007, then-NASA Administrator Dr. Mike Griffin anticipated applications where composite structures could provide benefits to space exploration systems and chartered the NASA Engineering and Safety Center (NESC) to form a team to design, build, and test a full-scale Composite Crew Module (CCM). This effort provided NASA with the experience needed to construct large-scale composite structures using best practices design and production techniques.
To leverage skills and distribute knowledge across the Agency, NESC formed a team with representation from Ames Research Center, Dryden Flight Research Center, Glenn Research Center, Goddard Space Flight Center, Jet Propulsion Laboratory, Johnson Space Center, Kennedy Space Center, Langley Research Center, and Marshall Space Flight Center. Industry partners with varied composite construction expertise included Alcore, ATK, Bally Ribbon Mills, Collier Industries, Genesis Engineering, Janicki Industries, Lockheed Martin, Northrop Grumman, and Tayco. These companies create a wide range of composite products, including aircraft and sailboats. With a mix of skills and cultures, extensive co-locations were established in the first year, and nearly 100 percent of the team members were fully dedicated to the project. This intense beginning allowed a smooth transition to a virtual team and the execution of a successful project, from conception onward, in just over 2 years.
One of the team’s goals was to develop a concept that took complete advantage of the strengths of composite materials. In order to keep launch vehicle mass as low as possible, a new structure design was created. Multiple concepts were brainstormed, and the final CCM became a combination of many features.
The design took advantage of state-of-the-art co-bonded joints that were developed under a contract for the U.S. Air Force. Three-dimensional woven, pi-shaped, preformed joints were extensively used in place of bonded and bolted joints. Using a building block approach, many specimens with these joints were constructed and tested. The pi preform technology exhibited approximately twice the pull-off strength of traditional L-clip joints.
A unique idea included using a lobed floor with an attached backbone structure. The lobed floor allowed the composite material to be very thin and still carry the internal pressure loads. This saved approximately 50 pounds of mass. Attaching the backbone structure to the floor created a potential structural load sharing with the heat shield during landing. It was estimated that this feature would save approximately 1,000 pounds in heat shield mass.
Tradeoffs between the design, analysis, and fabrication teams were necessary. The CCM sizing, fiber layouts, and analysis were performed using analytical tools such as FEMAP/PATRAN, MSC and NX/NASTRAN, Pro-Mechanica, ANSYS, HyperSizer, LS-DYNA, Thermal Desktop/RadCAD/SINDA, and Fiber SIM. More than 15 finite element models were used to analyze the structure. Efforts to save weight led to the most complex layup arrangement that the experienced team members, some with B-2 bomber development experience, had ever seen. The overall CCM system design was performed in about 12 months, and the analysis effort continued through the entire project.
To manufacture a large-scale composite structure, large-scale tooling, skilled technicians, and extreme attention to detail are required. The tools were constructed with a new technology using chopped fibers impregnated with resin to form the tooling surface. The final dimensions were then machined onto the surface with high-precision mills. This allowed the CCM to be built with dimensional accuracies of 0.010 inch for critical areas and 0.030 inch for noncritical areas.
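The two tolerance bands quoted above can be expressed as a simple acceptance check. The sketch below is illustrative only; the feature names and deviation values are invented, and only the 0.010-inch and 0.030-inch limits come from the text.

```python
# Hypothetical sketch: checking as-machined surface deviations against the
# CCM dimensional tolerance bands quoted in the text.
CRITICAL_TOL = 0.010     # inches, critical areas
NONCRITICAL_TOL = 0.030  # inches, noncritical areas

def within_tolerance(deviation_in, critical):
    """Return True if an absolute surface deviation (inches) meets its band."""
    limit = CRITICAL_TOL if critical else NONCRITICAL_TOL
    return abs(deviation_in) <= limit

# Example inspection records: (feature name, measured deviation, is_critical).
inspections = [
    ("docking-ring seat", 0.008, True),
    ("outer mold line", 0.022, False),
    ("window frame land", 0.012, True),   # exceeds the critical band
]

failures = [name for name, dev, crit in inspections
            if not within_tolerance(dev, crit)]
print(failures)   # ['window frame land']
```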
The overarching process for manufacturing the CCM required first laying up the composite material for the inner skin on the tool and curing it in an autoclave. Next, adhesive and an aluminum honeycomb core were applied and run through an oven cure. Finally, the outer skin was applied and cured in an autoclave.
The CCM was manufactured in two pieces: an upper half and a lower half. This two-piece construction was designed to allow more people to work simultaneously on loading instrumentation and equipment into a crew module. The two halves were later spliced together using an out-of-autoclave process. It took approximately 5 months for 50 people to fabricate the CCM, including the time allowed for the fabrication technicians to train engineers, analysts, and even project managers to perform some of the material layups.
Testing was also a complex process since pressure and combined loads were applied simultaneously. Testing involved designing and building a self-reacting load frame; using many types of instrumentation; impacting the CCM to test for damage tolerance; using different types of non-destructive examination (NDE) techniques; and finding a test facility that could withstand the blast energy when the article was tested to hydrostatic failure.
The reaction load frame was a large steel structure that encompassed the CCM. It allowed the test to be set up with minor modifications to the test facility. The CCM onboard instrumentation included 300 traditional strain gauges, 3,572 fiber optic strain gauges, 8 full-field photogrammetry zones, 2 independent acoustic emission systems, and 2 piezoelectric active acoustic wave monitoring systems.
To perform the damage tolerance tests, critical areas were impacted and then inspected with NDE. Life cycle loads were then applied. No detrimental growth after cycling with damage was detected. NDE techniques were used for monitoring the health of the structure throughout testing, including through-transmission ultrasonic testing, flash thermography, and visual inspections. The Combined Loads Testing System Facility at Langley was used to perform the tests. It is a large facility with a concrete walled test chamber specifically designed for testing large structures to failure.
CCM technical success criteria included specifications that the Preliminary Design Review (PDR) predicted mass should envelope the as-built mass and that the pretest analytical strain predictions should be within 20 percent of the strains measured during load tests. Based on the March 2007 loads, environments, and interfaces, the predicted PDR mass was 1,441 pounds. The CCM as-built mass, including onboard instrumentation, was measured to be 1,446 pounds, within approximately 5 pounds of the PDR prediction. All planned correlation test data strains were within approximately 5 percent of the predicted values, well within the 20-percent goal. By paying close attention to details in design and manufacturing, the mass and strength of large composite structures were accurately modeled and predicted.
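The two success criteria above reduce to simple numerical checks. The sketch below encodes them; the mass and strain figures come from the text, while the function names and the example strain values are illustrative.

```python
# Hypothetical sketch of the CCM success-criteria checks described above.
def strain_within_goal(predicted, measured, goal=0.20):
    """True if a measured strain is within `goal` fraction of its prediction."""
    return abs(measured - predicted) <= goal * abs(predicted)

def mass_margin(pdr_mass_lb, as_built_lb):
    """Positive margin means the PDR prediction envelopes the as-built mass."""
    return pdr_mass_lb - as_built_lb

# Strain correlation: measurements ~5 percent off easily meet the 20% goal.
print(strain_within_goal(1000e-6, 1050e-6))   # True  (5 percent deviation)
print(strain_within_goal(1000e-6, 1250e-6))   # False (25 percent deviation)

# Mass: 1,441 lb predicted vs. 1,446 lb as built, per the text.
print(mass_margin(1441, 1446))                # -5
```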
Using people from multiple NASA centers and multiple industry partners has helped to develop a composites experience network that is rapidly growing throughout NASA. Complex design and build efforts like the CCM increase knowledge in areas such as material systems, damage tolerance testing, analysis methodologies, design and drawing approach, test approach, and non-destructive examination. The CCM team has learned valuable lessons about how to make, inspect, and repair composite structures. This effort will help NASA learn about the benefits of using composite construction as well as help to determine if composite structures will be used for future space exploration systems.
LRO and LCROSS Missions Meet Science Goals
The Lunar Reconnaissance Orbiter (LRO) and Lunar Crater Observation and Sensing Satellite (LCROSS) missions launched from Kennedy on June 18, 2009. After launch, and LRO separation, the LCROSS shepherding spacecraft and the Atlas V’s Centaur upper stage rocket executed a flyby of the Moon and entered into an elongated Earth orbit to position LCROSS for impact at the lunar South Pole.
LRO completed its commissioning phase on September 15, 2009, and its observations will be used to identify potential safe landing sites for astronauts, to locate potential resources, to characterize the radiation environment, and to demonstrate new technology. The LRO spacecraft remained in low polar orbit for its 1-year exploration mission and returned global data sets that will be used to create temperature maps, a global geodetic grid, and high-resolution color imaging of the entire lunar surface. This mission places particular emphasis on the polar regions of the Moon, where continuous access to solar illumination may be possible, and on the permanently shadowed regions at the poles, where water exists.
On October 9, 2009, LCROSS completed its mission to confirm the presence or absence of water ice in the permanently shadowed Cabeus crater at the Moon’s South Pole. After separating from the LCROSS shepherding spacecraft, the Centaur became an impacting vehicle, ejecting material from the crater’s floor and creating a debris plume of water (ice and vapor), hydrocarbons, and hydrated materials that rose above the lunar surface. Following 4 minutes behind, the shepherding spacecraft flew through the debris plume, collecting and relaying data back to Earth before also impacting the lunar surface.
LCROSS’ science goals were fully met. The Centaur impacted the targeted area in the Cabeus crater within 100 meters. The plume rose approximately 5 kilometers, and even though not visible from Earth, was clearly detected by instruments on both LCROSS and LRO.
Prototype Sensor Measures Blood and Tissue Chemistry Without Incision
The National Space Biomedical Research Institute (NSBRI) is funding new research and technology to develop a system called the Venus prototype, under the direction of Dr. Babs Soller from the University of Massachusetts Medical School in Worcester, Massachusetts. The Venus is a medical technology that will enable a sensor system to measure a person’s blood and tissue chemistry with no need for painful incisions or blood draws. The noninvasive, needle-free system uses light to measure tissue oxygen and hydrogen ion concentration (pH, a measure of acidity or basicity).
|The Venus prototype is a noninvasive, needle-free system that uses light to measure tissue oxygen and hydrogen ion concentration (abbreviated as pH). It consists of a sensor (shown on the thigh) and a wearable monitor (shown on the waist).|
This NASA-funded technology may eventually serve as an alternative to drawing blood and can determine a person’s metabolic rate without additional medical equipment. The Venus has the ability to measure blood and tissue chemistry, metabolic rate (oxygen consumption), and other parameters.
To take measurements, the Venus prototype is placed directly on the skin. The 4-inch by 2-inch sensor uses near-infrared light (light just beyond the visible spectrum), and while some blood in the tiny blood vessels absorbs some of the light, the rest is reflected back to the sensor. The monitor then analyzes the reflected light to determine metabolic rate, tissue oxygen, and pH.
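The general principle of reflectance measurement can be illustrated with a toy calculation. The sketch below is not the Venus system's actual algorithm, which is far more sophisticated; it shows only the generic idea of converting reflected near-infrared intensity to an apparent absorbance and mapping it to a parameter through an assumed linear calibration. All coefficients are invented.

```python
import math

def absorbance(reflected, incident):
    """Apparent absorbance from diffuse reflectance (Beer-Lambert style)."""
    return -math.log10(reflected / incident)

def estimate_parameter(a, slope, intercept):
    """Map absorbance to a calibrated quantity; coefficients are assumed."""
    return slope * a + intercept

# Half the incident light is reflected back to the sensor.
a = absorbance(reflected=0.5, incident=1.0)          # ~0.301
print(round(estimate_parameter(a, slope=10.0, intercept=7.0), 2))   # 10.01
```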
Soller, who leads the NSBRI Smart Medical Systems and Technology team, explains how the technology can be used in space and on Earth. “The measurement of metabolic rate will let astronauts know how quickly they are using up the oxygen in their life-support backpacks. Tissue and blood chemistry measurements can also be used in medical care to assess patients with traumatic injuries and those at risk for cardiovascular collapse.”
Risk and Knowledge Management System Implements Safety, Reliability
A new system was initiated in 2006 at ESMD called the Integrated Risk and Knowledge Management (IRKM) system. The foundation of this system is continuous risk management (CRM).
CRM requires an evaluation of events coupled with proactive measures to control or mitigate risks. A novel aspect of the IRKM approach is that it uses risk records resulting from the CRM process to initiate an assessment of what information to transfer to solve a problem. It then follows up to capture the actual strategy or measures used to mitigate the risk. Risk records used in this fashion provide a cueing function similar to an aircraft sensor cueing a weapons system sensor. In the IRKM system, CRM informs knowledge management, and knowledge management becomes the enabler of CRM.
CRM identifies, analyzes, plans, tracks, controls, communicates, and documents risk through all life cycle phases of an organization’s product development. ESMD uses an enterprise risk management approach and a common framework for identifying, analyzing, communicating, and managing risks. Risks are communicated vertically through a well-defined escalation process, while horizontal integration occurs through a multi-tiered working group and board structure. This network of risk managers is also used to communicate lessons learned and best practices—referred to as a central nervous system for information flow.
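A CRM-style risk record with a vertical escalation path might be modeled as below. This is a hypothetical sketch, not ESMD's actual schema: the field names, board tiers, and the likelihood-times-consequence scoring are illustrative stand-ins for the multi-tiered working group and board structure the text describes.

```python
from dataclasses import dataclass, field

# Illustrative board tiers for vertical escalation (not ESMD's real names).
BOARDS = ["working group", "project board", "program board", "directorate"]

@dataclass
class RiskRecord:
    title: str
    likelihood: int          # 1 (low) .. 5 (high)
    consequence: int         # 1 (low) .. 5 (high)
    tier: int = 0            # index into BOARDS
    mitigations: list = field(default_factory=list)

    def score(self):
        """Simple likelihood-times-consequence risk score."""
        return self.likelihood * self.consequence

    def escalate(self):
        """Move the risk up one tier in the board structure."""
        if self.tier < len(BOARDS) - 1:
            self.tier += 1
        return BOARDS[self.tier]

risk = RiskRecord("composite joint debond", likelihood=3, consequence=4)
print(risk.score())      # 12
print(risk.escalate())   # project board
```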
The IRKM system has an important work-process-assist element called Process 2.0, or P2.0, which is in part modeled after the U.S. Army after-action review process. P2.0s are process-focused, collegial, structured reflection events. There has been huge demand for the P2.0 events, which assist teams in examining all aspects of a given process, including stakeholders, inputs, outputs, and products. P2.0 events use critical process mapping, structured brainstorming techniques, and process failure modes and effects analysis to identify and address process issues.
As an option, P2.0 users can take advantage of a Web-based collaboration tool suite. The tool provides a simple-to-use information capture capability that increases the volume and speed of idea capture and supports alternative analysis. Most important, the P2.0 method demands disciplined thinking to drive out process improvements for the team. P2.0s have been used to assist a diverse set of team processes, ranging from vibro-acoustic coupled-loads analysis to a simple integration meeting gone awry. In each case, the result has been rapid, transparent, team-authored process improvement.
ESMD risk records provide the context for knowledge-based risks—Web-based, multimedia knowledge bundles that provide users with expert advice on risk control and mitigation for specific technical risks. ESMD defines a knowledge-based risk as a risk record, with associated knowledge artifacts, to provide a storytelling narrative of how the risk was mitigated and what worked or did not work. As key risks are mitigated, particularly risks that are likely to recur across other programs in ESMD, knowledge is captured and transferred. Knowledge-based risks identify the effectiveness of mitigation activities, specifically in terms of cost, schedule, and technical performance. Instead of a collect, store, and ignore approach, knowledge-based risks form an active collection of lessons learned that are continually reused and updated.
The ESMD wiki environment enables horizontal communication, collaboration, and knowledge sharing across ESMD. More than 350 wikis provide a multi-functional toolset to assist ESMD teams. An important part of exploiting the wiki technology has been helping teams critically examine their work processes and information architecture, which is then mapped into the tool. The wiki provides teams with an easy-to-use, flexible interface to collaborate on documents, conduct discussions, manage calendars, locate information, and, most important, work more effectively.
Knowledge capture and transfer activities are designed to document project execution lessons learned and best practices using a conversation-based format. While overlapping in some respects, knowledge capture and transfer differs from P2.0 in that it focuses on project execution rather than recurrent process implementation. Knowledge capture and transfer rejects the notion of asking participants to fill out questionnaires. Rather, knowledge capture and transfer uses the most natural modality—conversation, but carefully structured and controlled conversation. Project risk records are used to guide the initial interviews. Individual issues are synopsized and aggregated, and a composite analysis is provided. Results are rapidly provided to stakeholders using a variety of communication modes, including briefings, design review checklists, peer assists, knowledge cafes (small group brainstorming), and video interviews.
The Riskapedia wiki space is intended to assist ESMD programs, projects, managers, and workers in implementing life cycle risk management practices and discipline. Riskapedia provides extensive content (tools, techniques, best practices, videos, and lessons learned) addressing the fundamental blocking and tackling skills of risk management: risk identification, risk assessment, and risk control and mitigation planning. The resource is a “hard hat” area that is intended to be under construction for life. The space has been populated with expert-developed content that is intended to evolve over time as users and contributing editors engage in ongoing construction of subject matter articles. Users have the opportunity to rate and discuss content, provide or author content (as a contributing editor), ask questions of experts, and use content in the performance of work and the management of risks.
ESMD risk records illuminate top engineering management and technical issues. Each case is structured to highlight key transferrable aspects of risk management. The proper application of risk management principles can help manage life cycle costs, development schedules, and technical scope, resulting in safer and more reliable systems for NASA’s future programs. Examining the critical thinking that made past programs successful could enhance the technical curiosity of engineers developing future space systems and make the programs equally robust.
Aeronautics Research Mission Directorate
NASA’s Aeronautics Research Mission Directorate (ARMD) conducts cutting-edge, fundamental research in traditional and emerging disciplines to help transform the Nation’s air transportation system and to support future air and space vehicles. Its goals are to improve airspace capacity and mobility, improve aviation safety, and improve aircraft performance while reducing noise, emissions, and fuel burn.
Ductile Superalloy Disk Coating Extends Life of Engine Components
ARMD worked with GE Aviation to develop a ductile metallic coating to protect engine components from damage. The coating is now being further developed by the Naval Air Warfare Center for potential application as corrosion protection for turbine disks. It is also being incorporated by the U.S. Air Force in the Hybrid Disk Program for high-temperature, extended-duration disk applications.
Powder metallurgy superalloy high-pressure compressor and turbine disks and seals in engines can experience time-dependent damage at exposed surfaces. Tests show fatigue life can be reduced by up to 80 percent by static and cyclic exposures at disk operating temperatures, even in lab air, through activation of surface crack initiation mechanisms. High-pressure turbine disks and seals in current engines can also experience hot corrosion-related pitting. The effects of hot corrosion damage in the form of pits have been found to reduce fatigue life even more severely, by up to 98 percent in accelerated corrosion test conditions.
Superalloy turbine blades, which are exposed to higher temperatures but reduced stresses, have been protected by metallic and ceramic coatings to prevent such forms of surface attack. The associated disks that hold all of the blades, however, are subjected to lower temperatures but much higher fatigue stresses, which have previously precluded such coating protection. Increasing temperatures and service times for advanced engines in the field were causing surface damage, and a coating was needed that could extend the life of exposed disk surfaces, yet not harm the inherent fatigue resistance of the superalloy.
A series of experimental coating alloys based on nickel with chromium and aluminum additions were screened in accelerated hot corrosion tests to ensure the coating could prevent formation of corrosion pits for long periods of time. The alloys were simultaneously applied to fatigue specimens and tested to examine the effect of the coating on fatigue resistance. A suitable alloy was found to extend corrosion life by over 200 percent, yet did not impair disk fatigue resistance. Fatigue specimens tested after extended periods of corrosion attack still did not fail from the exposed surfaces. The coating was found to be both adherent and resilient after water quenching, bending, and burner rig testing.
NASA continues to work with GE Aviation to test and model the benefits of the coating on disk life, in oxidation and hot corrosion conditions at current and higher disk temperatures.
High-Speed, Non-Immersion, Ultrasonic Scanning System Decreases Inspection Time
Early damage detection is critical for safe operation and cost-effective maintenance of aircraft structures. Traditional ultrasonic inspection can be an effective health assessment method, but current non-immersion methods can be slow and difficult to operate in the field. To implement a relatively low-cost, high-speed, and high-resolution ultrasonic scanning system that was simple and easy to operate, NASA researchers combined an improved non-immersion ultrasound method with a unique motion synchronization technique.
The system core consists of an open-loop motion control and data acquisition platform. Using commercial off-the-shelf hardware and custom multi-threaded control software, the system acquires time-domain data signals in precise synchronization with a continuous scanning motion. The synchronization method allows a low-cost stepper motor controller to generate high-speed, motion-synchronized output, which triggers the internal digitizer and any external excitation equipment. This yields a flexible platform suitable for any nondestructive evaluation (NDE) method requiring motion-synchronized time-domain acquisition, such as eddy current, acoustic, and ultrasound methods. System software provides a live C-scan display while simultaneously recording a time-domain waveform (A-scan) at every inspection point.
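The acquisition pattern described above, one waveform captured per motion trigger across a raster grid, can be sketched as follows. This is a minimal illustration under stated assumptions: the serpentine scan path, grid dimensions, and the stand-in `digitize` callback are invented, not the actual controller or digitizer interface.

```python
# Hedged sketch of motion-synchronized acquisition: the scan axis emits a
# trigger at every grid position, and one A-scan is recorded per trigger,
# building up a C-scan image.
def scan_positions(width, height, step):
    """Yield (x, y) trigger positions for a serpentine raster scan."""
    y, row = 0.0, 0
    while y <= height:
        xs = [i * step for i in range(int(width / step) + 1)]
        if row % 2:                  # reverse alternate rows (continuous motion)
            xs = xs[::-1]
        for x in xs:
            yield (x, y)
        y += step
        row += 1

def acquire_cscan(width, height, step, digitize):
    """Record one A-scan per trigger; return {position: waveform}."""
    return {pos: digitize(pos) for pos in scan_positions(width, height, step)}

# Fake digitizer returning a canned waveform, standing in for real hardware.
cscan = acquire_cscan(1.0, 1.0, 0.5, digitize=lambda pos: [0.0, 1.0, 0.0])
print(len(cscan))   # 9 grid points for a 1.0 x 1.0 scan at 0.5 step
```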
This non-immersion method uses a captured water column with a durable membrane to couple sound from a standard (or custom) immersion probe. The inspector can select transducer frequency, geometry, and focusing characteristics to meet the inspection requirements, as one would in an immersion inspection. The durable membrane can be adjusted to accommodate a wide range of rough or uneven surfaces, and the method does not require a water pump, vacuum, hoses, or complicated seals. The operator need only apply a light mist of soapy water to promote acoustic coupling and decrease wear. The membrane is easy to replace and the water column is easily set up with a syringe.
The current implementation can be mounted on a range of different load frames for in-situ, high-resolution inspection of specimens under test. However, the approach can also be widely applied to scanning subsystems in the field or laboratory. Existing scanning systems in the NASA NDE laboratory will be refurbished using this technique. The new system can lower defect detection thresholds, decrease inspection cost and time, and increase inspection area and frequency.
Subscale Flight Testing Facility Provides Proving Ground for Flight Technologies
|The unmanned vehicles in the Airborne Subscale Transport Aircraft Research facility (left) are only about 1/20th the size of the aircraft they are built to represent. Inside the mobile operations station (right) is a flight computer and research pilot station with several displays, including a navigation display and an analog video feed from a nose-mounted camera.|
As part of a long-term strategy to improve aviation safety, engineers at NASA are looking for ways to avoid, and if necessary mitigate, loss-of-control events in transport aircraft. Achieving this strategy requires an understanding of the aircraft’s dynamics in abnormal and upset flight conditions, when events such as structural damage, hydraulic failures, or ice buildup change the vehicle’s performance to an extent that traditional autopilots are unable to fly the aircraft and pilots may be faced with highly coupled controls and oscillatory or even divergent handling characteristics.
The development of assistive control technologies requires a rich set of experimental data, obtained near the boundaries of the operating envelope. This is where the Airborne Subscale Transport Aircraft Research (AirSTAR) facility comes into play. AirSTAR is both a ground facility and a set of unmanned aircraft. The ground facility is similar to a modern flight simulator, with computer-generated out-the-window views, detailed instrumentation overlays on a heads-up display, and a full set of side-stick, throttles, and rudder pedals for pilot interface.
The difference between AirSTAR and a flight simulator is that in AirSTAR, the vehicle is actually flying as a remotely piloted aircraft communicating with display and control computers over a high-bandwidth telemetry link. An openly distributable simulation model has also been developed for AirSTAR, which allows advanced control algorithms to be quickly evolved from proof-of-concept simulations through validation flight tests.
The unmanned vehicles in the AirSTAR facility are only about 1/20th the size of the transport aircraft they are built to represent. With a careful structural design that scales the mass distribution and density along with the geometry, these sub-scale flight vehicles provide flight-dynamic and aerodynamic responses that are faster than, but otherwise identical to, those of their full-scale counterparts. This retains relevance to the target application and allows experiments to be conducted that carry more risk and incur larger structural loads than would be feasible on a full-scale transport.
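As a hedged aside on why the responses are faster: for dynamically scaled free-flight models, Froude scaling is the usual convention, under which time (and thus response periods) scales with the square root of the geometric scale factor. The text does not state which scaling laws AirSTAR uses, so the calculation below is illustrative.

```python
import math

def froude_time_ratio(scale):
    """Model-to-full-scale time ratio under Froude scaling (assumed here)."""
    return math.sqrt(scale)

# At roughly 1/20 scale, motions play out about 4.5x faster than full scale.
print(round(1.0 / froude_time_ratio(1 / 20), 1))   # 4.5
```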
In September 2009, the AirSTAR system reached operational status and flew from an unmanned vehicle runway at NASA’s Wallops Flight Facility. One of the experiments performed included automated surface perturbations during a slow approach to stall, which allowed for the identification of flight models at high angles of attack. The flights also included simulated damage to the elevator and degradation in aircraft stability. Fully adaptive control algorithms were implemented and tested with pilots evaluating the benefits of these controller-assisted flight modes under emergency conditions. These test results, and future high-risk modeling and controls experiments planned with this facility, provide a proving ground for flight technologies that will continue to improve aviation safety in the next generation of transport aircraft.
Estimation for Health Management Technologies Reduces Uncertainties
Aircraft engines are highly complex systems consisting of static and rotating components, along with associated subsystems, controls, and accessories. They are required to provide reliable power generation over thousands of flight cycles while being subjected to a broad range of operating conditions, including harsh temperature environments. Over repeated flight cycles, engine performance will degrade and engine malfunctions may occur.
Under the Aviation Safety Program’s Integrated Vehicle Health Management (IVHM) project, NASA researchers are developing innovative Estimation for Health Management (EHM) technologies to assist aircraft operators in managing the safety and reliability of their gas turbine engine assets. This includes the development of real-time onboard models for the in-flight estimation of engine performance parameters that can be directly utilized by EHM applications. These onboard models are designed with the ability to self-tune, or adapt, based on sensed measurements to track the performance of the actual engine. A challenge associated with developing accurate onboard models is the fact that engine components will naturally experience degradation over time, affecting an aircraft engine’s performance.
The level of engine degradation is generally described in terms of unmeasured health parameters like the efficiency and flow capacity of each major engine component. Using mathematical estimation techniques, health parameters and the level of performance degradation can be estimated, given that there are at least as many sensors as parameters to be estimated. In an aircraft engine, however, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate and adjust a subset of the model health parameters, referred to as model tuning parameters. This approach will enable the onboard model to track measured engine outputs. Model-produced estimates of unmeasured engine outputs may be inaccurate, however, due to the fact that the impact of all the health parameters will not be accurately represented within the model.
To address this challenge, NASA has developed an innovative methodology that constructs a set of model tuning parameters defined as a linear combination of all health parameters and of appropriate dimension to enable estimation. Selection of the tuning parameters is performed using an analytical method designed to minimize the estimation error in the model parameters of interest. The new methodology has been validated using an aircraft turbofan engine simulation. The results demonstrated that applying the enhanced tuning parameter selection methodology resulted in a 30-percent reduction in average estimation error compared to the conventional approach of selecting a subset of health parameters to serve as the tuning parameters.
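A loose numerical sketch of the idea, not NASA's actual algorithm, is shown below. A made-up linear "engine" has six health parameters but only three sensors, so the estimation problem is under-determined. The conventional approach tunes a subset of three health parameters; the linear-combination approach estimates a reduced set of tuners via the singular value decomposition of the influence matrix (the minimum-norm solution) and reconstructs all six parameters. The influence matrix and degradation states are random synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_health = 3, 6
H = rng.normal(size=(n_sensors, n_health))   # synthetic influence matrix
H1 = H[:, :n_sensors]                        # columns for the tuned subset

subset_err, combo_err = [], []
for _ in range(200):
    h_true = rng.normal(size=n_health)       # actual degradation state
    y = H @ h_true                           # noise-free sensed measurements

    # (a) Conventional: tune only the first 3 health parameters.
    h_a = np.zeros(n_health)
    h_a[:n_sensors] = np.linalg.solve(H1, y)

    # (b) Linear-combination tuners: the SVD-based pseudo-inverse gives the
    # minimum-norm reconstruction over all 6 health parameters.
    h_b = np.linalg.pinv(H) @ y

    subset_err.append(np.linalg.norm(h_true - h_a))
    combo_err.append(np.linalg.norm(h_true - h_b))

# The linear-combination tuners yield a lower average estimation error.
print(round(float(np.mean(combo_err)), 3), round(float(np.mean(subset_err)), 3))
```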
This technology holds great potential for applications that require a real-time estimate of unmeasured engine outputs. It is not practical to sense every parameter of interest due to cost, weight, and harsh high-temperature environment constraints. Therefore, parameter estimates are often indirectly inferred through other sensed measurements. Improving the accuracy of these synthesized parameter estimates through onboard adaptive models can help reduce uncertainty, and directly improve applications such as engine diagnostics, controls, and usage-based life consumption calculations.
Southwest Airlines Identifies Anomalies Using NASA’s Data Mining Tool
Southwest Airlines currently analyzes data for about 1,600 flights each day from 305 different aircraft. It uses a third-party analysis tool to identify threshold exceedances based on flight operations manual limits and other coordinated parameter limits. A daily review and voluntary pilot reporting are how most anomalies are discovered. However, anomalies in multivariate datasets are often represented by more than just single-variable exceedances. Individual variables may be within normal ranges while the normal relationships among them may be violated.
Orca is one of several multivariate anomaly detection methods developed by researchers at Ames Research Center. Southwest Airlines experimented with Orca on 7,200 flight segments containing descents from 10,000 feet to touchdown on a single runway. Orca identified significant anomalies such as high roll and pitch events near the final approach and hard nose-over events prior to landing. These results have caused Southwest to add new events to the daily exceedance review performed by its third-party analysis tool. NASA and Southwest plan to continue exploring the use of NASA-developed and other data mining methods for anomaly detection on increasing amounts of flight data.
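Orca's general approach is distance-based: a record is scored by how far it lies from its nearest neighbors, so a flight whose variables are individually in range but jointly inconsistent still stands out. The sketch below illustrates that idea only; it is not Orca's implementation, and the flight variables and numbers are invented.

```python
import numpy as np

def knn_anomaly_scores(X, k=3):
    """Average Euclidean distance to each row's k nearest neighbors."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a point is not its own neighbor
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

rng = np.random.default_rng(0)
airspeed = rng.normal(140.0, 5.0, size=50)               # knots (toy data)
pitch = 0.05 * airspeed + rng.normal(0.0, 0.2, size=50)  # normally correlated

# Append one record whose values are each in range but whose airspeed/pitch
# relationship is broken -- no single-variable exceedance would flag it.
X = np.vstack([np.column_stack([airspeed, pitch]), [140.0, 3.0]])
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize each variable

scores = knn_anomaly_scores(X)
print(int(np.argmax(scores)))                # 50: the injected record
```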
The goal of NASA’s IVHM project is to develop validated tools, technologies, and techniques for automated detection, diagnosis, and prognosis that enable mitigation of adverse events during flight. Indications of these adverse events are thought to manifest themselves within the vast amount of data produced during flight by the thousands of aircraft and associated systems and people that operate in the skies each day. Fortunately, the number of adverse events experienced is very small relative to the amount of activity occurring. Detecting the small subsets of the data that represent adverse events is the first step to determining exactly what went wrong and identifying the most probable precursors of an adverse event.
A significant area of research and development at NASA and other organizations is model-based methods for fault detection. Such methods involve developing a model that represents normal operation of a complex engineered system, such as an aircraft, and flagging sensed data that differ significantly from what the model predicts; such data are assumed to represent abnormal operation. These methods are often effective because their models encode domain expert knowledge. However, the models are typically unable to represent the full range of normal operating modes, including modes with small amounts of degradation that are not enough to be considered adverse events. Much of this knowledge is contained within the vast amount of data representing many years of operation of many different aircraft, and data mining methods can extract it.
Data mining methods for anomaly detection assume that most of the training data supplied represent normal operations, so the methods try to find the small amount of data that is significantly different from the rest. Research and development in this area aim to find anomalies of various types in vast data repositories. Examples include strange sequences of computer system commands that may represent hackers’ activities, unusual weather events, or unusual aircraft operations.
Researchers at Ames have developed several anomaly detection methods including Orca, which is designed to quickly find distance-based anomalies. Distance-based anomaly detection methods flag data points that are furthest from most of the data, which is assumed to be normal. In Orca, the user can set a distance metric to use, such as the average distance to a point’s k-nearest neighbors for some user-chosen number k, and the maximum number, N, of anomalies to be identified. Orca then finds the N points that have the highest value for that distance metric. Orca does this efficiently by typically not requiring the comparison of all pairs of data points.
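The distance metric described above can be illustrated with a brute-force version. Orca itself gains its speed by pruning points whose running score falls below the current cutoff, so it typically avoids comparing all pairs; this sketch omits that optimization, and the data are synthetic.

```python
import numpy as np

def knn_anomaly_scores(X, k=3):
    """Average distance to each point's k nearest neighbors (brute force)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    nearest = np.sort(d, axis=1)[:, :k]  # k smallest distances per point
    return nearest.mean(axis=1)

def top_anomalies(X, k=3, n=2):
    """Indices of the n points with the highest k-NN distance score."""
    scores = knn_anomaly_scores(X, k)
    return np.argsort(scores)[::-1][:n]

# A dense cluster plus two isolated points: the isolated points score highest
# even though no single variable crosses an obvious threshold.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, size=(50, 2)), [[3.0, 3.0], [-3.0, 2.5]]])
print(top_anomalies(X, k=3, n=2))   # the two isolated points, indices 50 and 51
```

The user-chosen quantities match the description in the text: the neighbor count k sets the distance metric, and n caps how many anomalies are reported.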
Data-driven anomaly detection methods can be used in domains other than commercial aviation. Researchers at Ames have used these methods to identify anomalies in climate data, space shuttle main engine test data, and space station operations, among others. NASA is continuing research into developing novel anomaly detection methods that are fast enough to analyze very large data repositories, with applications to the many problems of interest to NASA. Researchers are also working on extending these methods to heterogeneous data: data that include not just continuous measurements, but also discrete data (such as pilot switches and system modes), and even text repositories (such as reports written by flight crews and passengers that describe problems that occurred during flight). The goal is to develop methods that automatically analyze these data not only to detect anomalies, but also to identify what happened to bring about these anomalies, and why. This information will enable mitigation of adverse events and improve safety for the flying public.
Alternative Fuels Tested to Potentially Provide Aviation Fuel
The high cost of aviation fuel, the growing demand for air travel, and the environmental issues associated with petroleum fuels have prompted the aviation industry and engine companies to look for alternative sources of aviation fuel. Some alternative fuels and fuel blends, including renewable biofuels, are appealing alternatives to current hydrocarbon-based fuels and show promise as aviation fuels.
To determine the effects of future alternative fuels on emissions from aircraft engines, NASA partnered with the U.S. Department of Defense, U.S. Environmental Protection Agency (EPA), and Federal Aviation Administration to conduct an experiment examining the performance and emissions of alternative fuels using the NASA Dryden Flight Research Center’s DC-8 aircraft.
In early 2009, a team of NASA engineers and 11 other research groups conducted field tests to assess the combustion emissions, engine performance characteristics, and engine exhaust plume composition of two non-petroleum-based jet fuels. The Alternative Aviation Fuel Experiment (AAFEX) tests were conducted from late January through February 3 at Dryden using synthetic fuels produced with the Fischer-Tropsch (F-T) process. One was derived from natural gas and the other was produced from coal. Both had drawn attention because of their potential to supply the high energy necessary for commercial flight, and both were available in the quantities required for this large-scale test.
The F-T process is a chemical reaction in which a synthesis gas (a mixture of carbon monoxide and hydrogen) is converted into hydrocarbons of various forms. The process can produce synthetic petroleum for use as a lubricant or fuel, and has been around for decades. Until now, the high cost of building new plants to produce synthetic fuels has stymied interest, except in South Africa, where Sasol, an energy and chemicals company, has been producing jet fuel from coal for a number of years. The United States does not have any F-T plants, although synthetic fuel is now being produced using this process at a number of other locations around the world.
The DC-8 was utilized for the experiments because it had been used previously for an extensive series of emissions testing in the Aircraft Particle Emissions Experiment (APEX). For each of the two F-T fuels, researchers tested both 100-percent synthetic and 50-50 blends of synthetic and standard jet fuel. Almost all previous engine testing had considered only 50-50 blends, and no aircraft had been previously tested using 100-percent synthetic fuel.
Researchers found that burning F-T fuel did not appreciably affect engine performance, but did lead to aircraft and storage tanker fuel leaks for the pure F-T fuels due to seal shrinkage from exposure to the aromatic-free fuels. Small effects of synthetic fuel were found on NOx, CO, and unburned hydrocarbon emissions. The most profound effect of the synthetic fuels, however, was to reduce engine black carbon number density and mass emissions by as much as 75 percent, relative to JP-8, at lower power conditions. Particulates were reduced across the full range of engine powers but reductions were less at higher power. The F-T fuels also reduced hazardous air pollutant emissions, and the fuel’s lack of sulfur impurities suppressed formation of volatile aerosols in the test engine’s exhaust plume as it cooled and mixed with ambient air.
A limited amount of emissions testing was also conducted using the auxiliary power unit of the aircraft with one of the pure F-T fuels and standard jet fuel. Engine performance, when operating on the F-T fuel, was not appreciably affected, though particulates were substantially reduced.
In addition to collecting needed emissions data for the fuels, NASA is ensuring that the other information is available to the community by developing a database for alternative fuels. The database was created using Microsoft Access and currently has all the standard properties specified in the current ASTM D1655 (a standard specification for aviation turbine fuels) for 19 different synthetic fuels and fuel blends. The database can provide reports for selected fuels, fuel properties, or ranges of fuel properties that can be printed or exported to an Excel file for further processing. Additional fuels can be added to the database as data becomes available.
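The report workflow the database supports can be sketched with SQLite standing in for Access. The fuel names and property values below are invented for illustration, and the columns are only a small subset of the properties ASTM D1655 specifies.

```python
import sqlite3

# Illustrative stand-in for the fuel-properties database described above.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fuels (
    name TEXT, density_kg_m3 REAL, flash_point_c REAL, freeze_point_c REAL)""")
conn.executemany("INSERT INTO fuels VALUES (?, ?, ?, ?)", [
    ("FT-natural-gas", 737.0, 44.0, -53.0),   # made-up values
    ("FT-coal",        755.0, 46.0, -55.0),
    ("JP-8 baseline",  805.0, 48.0, -47.0),
])

# Report-style query: fuels within a density range, like the range reports
# the database can export for further processing.
rows = conn.execute(
    "SELECT name, density_kg_m3 FROM fuels "
    "WHERE density_kg_m3 BETWEEN ? AND ? ORDER BY density_kg_m3",
    (700.0, 760.0)).fetchall()
print(rows)   # → [('FT-natural-gas', 737.0), ('FT-coal', 755.0)]
```

A query result like this maps directly onto the database’s described exports: each row set can be printed as a report or written out to a spreadsheet for further processing.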
NASA is one of many organizations aggressively working to understand how non-petroleum alternatives may be used to satisfy the growing demand for less expensive, cleaner burning fuel for aviation. The AAFEX tests were funded and managed by the NASA Fundamental Aeronautics Program (FAP) of ARMD.
FAP is continuing to conduct research and is making significant progress in advancing alternative fuels technology. Most of this work is conducted under the Subsonic Fixed Wing Project and capitalizes on NASA’s technical expertise and relevant facilities to increase knowledge of fuel characterization and performance. Other related work includes measuring the thermal stability of alternative fuels and fuel blends to determine their suitability for aircraft use, and fundamental studies of the F-T reaction and catalyst effects on aviation fuel yield. Research is also being conducted on renewable biojet fuels, including fundamental studies on biomass for efficient jet fuel production as well as emissions testing of biojet fuels. A second alternative fuel test is planned for the NASA DC-8 using renewable biojet fuels in 2011.
Electron Beam Fabrication Revolutionizes Aircraft Designs and Spacecraft Supportability
Electron Beam Freeform Fabrication (EBF3) is a cross-cutting technology advance in layered part fabrication for producing structural metal parts. Developed by researchers at NASA’s Langley Research Center as a replacement for forgings, this manufacturing process offers significant reductions in cost and lead time. The promise of this technology extends far beyond its applicability to low-cost manufacturing and aircraft structural designs. EBF3 could provide a way for astronauts to fabricate structural spare parts and new tools aboard the International Space Station (ISS) or on the surface of the Moon or Mars.
EBF3 works in a vacuum chamber, where a focused electron beam creates a molten pool on a metallic substrate: a source of metal is melted and applied as called for by a drawing, one layer at a time, on top of a programmed moving surface until the part is complete. EBF3 has two key requirements: a detailed three-dimensional drawing of the object to be created, and a material that is compatible for use with an electron beam.
The drawing is needed to divide the object into layers, with each cross-section used to guide the electron beam and metal source in reproducing the object, building it up layer by layer. The material must be compatible with the electron beam so that it can be heated by the stream of energy and briefly turned into liquid form (aluminum is an ideal material). EBF3 can actually handle two different sources of metal, also called feed stock, at the same time, either by mixing them together in a unique alloy or by embedding one material inside another. Potential uses for the latter include embedding a strand of fiber optic glass inside an aluminum part, enabling the placement of sensors in areas that were previously impossible.
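The slicing step can be illustrated with a toy example. Real EBF3 slicing works from a full three-dimensional CAD drawing; here a simple cone stands in for the model, and the dimensions and layer thickness are arbitrary.

```python
# Toy sketch of the layer-slicing step: divide a part of known height into
# deposition layers and compute each layer's cross-section.  A cone stands
# in for the CAD model, so each cross-section is just a circle whose radius
# shrinks linearly from base to tip.
def slice_cone(height_mm, base_radius_mm, layer_mm):
    layers = []
    z = 0.0
    while z < height_mm:
        r = base_radius_mm * (1.0 - z / height_mm)
        layers.append((round(z, 3), round(r, 3)))   # (layer height, radius)
        z += layer_mm
    return layers

layers = slice_cone(height_mm=10.0, base_radius_mm=4.0, layer_mm=2.5)
print(layers)   # → [(0.0, 4.0), (2.5, 3.0), (5.0, 2.0), (7.5, 1.0)]
```

Each tuple corresponds to one pass of the deposition head: the beam and feed stock trace that cross-section before the surface indexes up by one layer thickness.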
This layer-additive process enables fabrication of parts directly from computer-aided design drawings. Metal can be placed only where it is needed and the material chemistry and properties can be tailored throughout a single-piece structure, leading to new design methods for integrated sensors, tailored structures, and complex, curvilinear (characterized by following a curved line) stiffeners. The parts can be designed to support loads and perform other functions such as aeroelastic tailoring or acoustic dampening.
Future lunar-based crews could use EBF3 to manufacture spare parts as needed, rather than rely on a supply of parts launched from Earth. Astronauts could possibly mine feed stock from the lunar soil, or even recycle used landing craft stages by melting them. The immediate and greatest potential for the process, however, is in the aviation industry where major structural segments of an airliner, or casings for a jet engine, could be manufactured for about $1,000 per pound less than conventional means. Environmental savings also are made possible by deploying EBF3.
To demonstrate the potential of EBF3 for on-orbit applications, an ISS EBF3 demonstration to produce on-demand tools, parts, and structures is being proposed. The theme of the proposed demonstration is “Materials Science and Supportability.” On-demand, additive manufacturing reduces spare parts mass and volume requirements, increasing payload capability and mission flexibility on long-duration missions. EBF3 also enables repair and assembly of large structures on-orbit. The main objective is to advance the technology readiness level of on-demand manufacturing and to demonstrate environmentally friendly fabrication and repair technology on-orbit. Experiments in basic materials science, physics, and dynamics of molten metals in microgravity will help to mature the technology.
The maturation of EBF3 from inception to commercialization, including the formation of a government-industry team to hand the process off to the industrial manufacturing sector, exemplifies a significant technology spinoff that continues to forge a stronger partnership between NASA and industry. It also shows how a manufacturing process can influence future aircraft designs by enabling multidisciplinary optimization, and demonstrates the continuing contributions of NASA research and development to technology and innovation. The effort will culminate in the transition from a materials process development activity to a tool that can change design methodologies to incorporate aeroelastic and acoustic tailoring into aircraft structures.
Although developed for aerospace applications, EBF3 advances metal manufacturing techniques and adds new capability to the rapid manufacturing and rapid prototyping industry for manufacturing complex components and customized parts. Spinoff opportunities for on-demand manufacturing of custom or prototype parts exist in industries from custom automotive applications to racing bikes, sporting goods to medical implants, oil drilling tools, and power plant hardware. A small, robust EBF3 system, such as that proposed for demonstration on the ISS, also offers a capability for conducting repairs and building replacement parts in remote locations such as Antarctic scientific bases, Navy ships, military operations, and in space.
Energy Absorbing Technology Tested for Improved Helicopter Safety
According to the National Transportation Safety Board, more than 200 people are injured in helicopter accidents in the United States each year, in part because helicopters fly in riskier conditions than most other aircraft. They fly close to the ground, not far from power lines and other obstacles, and often are used for emergencies such as search and rescue and medical evacuations.
NASA aeronautics researchers recently dropped a small helicopter from a height of 35 feet at the NASA Langley Landing and Impact Research Facility to investigate how effectively an expandable honeycomb cushion, called a deployable energy absorber (DEA), would mitigate the destructive force of a crash. The objective of the drop test was to make helicopters safer by protecting crew and passengers from severe or even catastrophic injuries in uncontrolled crash scenarios and environments.
For the test, researchers used an MD-500 helicopter donated by the U.S. Army. The rotorcraft was equipped with instruments that collected 160 channels of data. One of the four crash test dummies was a special torso model equipped with simulated internal organs, provided by the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland. The underside of the helicopter’s crew and passenger compartment was outfitted with the DEA, which is made of Kevlar and has a unique flexible hinge design that allows the honeycomb to be packaged flat until needed. The DEA was invented and patented by Dr. Sotiris Kellas, a senior aerospace engineer in the Structural Dynamics Branch at Langley.
For the crash test, the MD-500 helicopter was suspended about 35 feet in the air by cables and swung to the ground; pyrotechnics severed the cables just before impact so that the helicopter reacted as it would in a real accident. The test conditions imitated a relatively severe helicopter crash: the flight path angle was about 33 degrees, and the combined forward and vertical speed was about 48 feet per second, or 33 miles per hour.
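As a sanity check on the quoted test conditions, the 48 ft/s impact speed can be decomposed along the 33-degree flight path into forward and vertical components, and the mph figure confirmed:

```python
import math

# Decompose the quoted 48 ft/s impact speed along the 33-degree flight path.
speed_fps = 48.0
angle_deg = 33.0

forward_fps = speed_fps * math.cos(math.radians(angle_deg))   # along-ground
vertical_fps = speed_fps * math.sin(math.radians(angle_deg))  # sink rate
speed_mph = speed_fps * 3600.0 / 5280.0                       # ft/s to mph

print(round(forward_fps, 1), round(vertical_fps, 1), round(speed_mph, 1))
# → 40.3 26.1 32.7
```

The 32.7 mph result is consistent with the article’s rounded figure of 33 miles per hour.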
On impact, the helicopter’s skid landing gear bent outward, but the cushion attached to its belly kept the rotorcraft’s bottom from touching the ground. Data from the four crash test dummies were compared with human injury risk criteria and the results indicated a very low probability of injury for this crash test. In addition, the airframe sustained minor damage to the front right subfloor region. The crash data will be further analyzed to determine whether the DEA worked as designed. The acquired data will be used to validate the integrated computer models for predicting how the different parts of the helicopter and the occupants react in a crash, while the torso model test dummy will help to assess internal injuries to crew and passengers in a helicopter crash.
The damage was repaired and the airframe was used in a second full-scale crash test, this time without the deployable energy absorber. The test went as anticipated. After the countdown and release, the helicopter hit the concrete: its skid gear collapsed, the windscreen cracked open, and the occupants lurched forward violently, suffering potentially spine-crushing injuries according to internal data recorders.
As the crash dynamics of the helicopter and the impact on the crew and passengers (dummies) become better understood, the results will improve crash performance, enhancing safety and minimizing the severity of injuries to crew and passengers. The DEA technology also has potential applications in other aircraft, including commercial aircraft, to save lives in future accidents.
Aviation Safety Research Advances Optical Neuroimaging
Human performance issues are often cited as causal factors in aviation accidents. Even the most expert and conscientious pilots are susceptible to making errors in certain circumstances. During prolonged critical activities, a person’s performance can decrease due to fatigue or workload. Continuous monitoring of attention and performance is therefore important for sustained safety. Intelligent cockpits of the future will interact with pilots in ways designed to reduce error-prone states and mitigate dangerous situations at the edges of human performance. One critical need is to develop reliable and operationally relevant metrics for the state of the operator using noninvasive, portable, safe, and inexpensive means.
To assist, recent research by NASA Aviation Safety aims to enable functional near-infrared spectroscopy (fNIRS) as a replacement or complementary technique to electroencephalography, or EEG, and other physiological measurements. The fNIRS process is a noninvasive, safe, portable, and inexpensive method for monitoring brain activity. It uses both visible and near-infrared light to sense blood flow in the cortex and quantify changes in concentration of oxygen in the blood indicating neural activity. This research could help to reduce the effects of performance decrement and improve safety by informing intelligent systems of the state of the operator during flight.
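Converting light-intensity changes into blood-oxygenation changes is commonly done with the modified Beer-Lambert law, which relates optical density changes at two wavelengths to concentration changes of oxygenated and deoxygenated hemoglobin. The sketch below illustrates that standard calculation; it is not part of the NASA headgear work itself, and the extinction coefficients, geometry, and intensities are illustrative values, not calibration data.

```python
import math

def hb_concentration_changes(I0, I, eps, path_cm, dpf):
    """Solve the 2x2 modified Beer-Lambert system for (dHbO, dHbR).

    I0, I: baseline and current detected intensities at two wavelengths.
    eps:   ((eps_HbO, eps_HbR) per wavelength), illustrative coefficients.
    """
    # Optical density change at each wavelength.
    dOD = [math.log10(I0[w] / I[w]) for w in range(2)]
    L = path_cm * dpf                      # effective path length via DPF
    (e_hbo1, e_hbr1), (e_hbo2, e_hbr2) = eps
    det = (e_hbo1 * e_hbr2 - e_hbr1 * e_hbo2) * L
    # Invert the 2x2 system dOD_w = L * (eps_HbO_w*dHbO + eps_HbR_w*dHbR).
    dHbO = (e_hbr2 * dOD[0] - e_hbr1 * dOD[1]) / det
    dHbR = (-e_hbo2 * dOD[0] + e_hbo1 * dOD[1]) / det
    return dHbO, dHbR

# Illustrative: intensity rises at the HbR-sensitive wavelength and drops at
# the HbO-sensitive one, consistent with activation (HbO up, HbR down).
dHbO, dHbR = hb_concentration_changes(
    I0=(1.0, 1.0), I=(1.02, 0.97),
    eps=((0.35, 1.5), (1.1, 0.8)),   # (HbO, HbR) coefficients, illustrative
    path_cm=3.0, dpf=6.0)
print(dHbO > 0, dHbR < 0)   # → True True
```

The differential pathlength factor (DPF) accounts for light scattering through tissue lengthening the photon path beyond the source-detector separation.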
The fNIRS method uses light sources placed on the surface of the scalp, paired with detectors that receive the light as it returns from the scalp and outer layers of the brain. Although fNIRS works well in the laboratory, the hardware currently used to carry optical signals to and from the scalp typically requires a heavy helmet, and this bulk has held the technique back in industry and academia.
To address these obstacles, biomedical engineers at Glenn Research Center are developing next-generation headgear concepts for fNIRS, which could also enable its use for monitoring during any activity that pushes the limits of human performance. Moving away from helmet-based mounting systems, the team has demonstrated the optical and ergonomic usefulness of a material that allows headgear to be both comfortable and reliable: a lightweight, cleanable, curable, biomedical-grade elastomer that transmits the light while increasing comfort. Because optical component-to-skin contact is required, existing fNIRS headgear can be painful and usually requires a time-consuming dressing process. A thin layer of the transmissive elastomer between the skin and the fiber tip has vastly increased comfort and wear-time while maintaining skin contact and good signal levels.
Compared to reported wearable systems, which have been primarily used to examine prefrontal areas accessible in front of the hairline, this headgear will place nothing on the head but the interrogating fiber optics. This saves bulk for applications requiring lightweight, low-profile sensors (such as under a helmet or integrated with headphones), provides for the possibility of increased sensor population due to reduced footprint at each source and detector location, and allows compatibility for multi-modal validation testing.
A second feature of the next-generation fNIRS headgear is a comb-based shape, which mounts the optical surfaces in the wake of a comb tooth so that the hair is automatically parted at each location to be examined. The headgear is applied in a single glasses-to-headband motion while maintaining contact between the optical surface and the skin, obviating the need for a second person to position and couple each optical sensor. Integration with existing commercial fNIRS instrumentation depends only on the fiber optic connection type. This aspect of the work is in the prototype phase.
The moldable properties of the elastomer provide the potential for an elegant solution employing diffractive optical technology. A grating embedded in the elastomer has the potential to turn the light toward the scalp over a very short distance without using a glass prism. This aspect of the work is in the design phase.
Outside of aeronautic and space applications, this research also facilitates investigations of neuroscience in practical work settings, and the development of usable brain-computer interfaces for biofeedback, rehabilitation, skill acquisition and self-treatment. The field of fNIRS is growing rapidly in both research and clinical applications and commercial fNIRS systems are emerging, with brain-computer interfaces becoming a reality in the market.
Science Mission Directorate
NASA leads the Nation on a great journey of discovery, seeking new knowledge and understanding of our planet Earth, our Sun and solar system, and the universe out to its farthest reaches and back to its earliest moments of existence. NASA’s Science Mission Directorate (SMD) and the Nation’s science community use space observatories to conduct scientific studies of the Earth from space, to visit and return samples from other bodies in the solar system, and to peer out into our galaxy and beyond.
Kepler Finds Five New Exoplanets
NASA’s Kepler space telescope, designed to find Earth-size planets in the habitable zone of Sun-like stars, has discovered its first five new exoplanets, or planets beyond our solar system. Kepler’s high sensitivity to both small and large planets enabled the discovery of the exoplanets, named Kepler 4b, 5b, 6b, 7b, and 8b. The discoveries were announced January 4, 2010, by members of the Kepler science team during a news briefing at the American Astronomical Society meeting in Washington, D.C.
“These observations contribute to our understanding of how planetary systems form and evolve from the gas and dust disks that give rise to both the stars and their planets,” said William Borucki of Ames Research Center. Borucki is the mission’s science principal investigator. “The discoveries also show that our science instrument is working well. Indications are that Kepler will meet all its science goals.”
Known as “hot Jupiters” because of their high masses and extreme temperatures, the new exoplanets range in size from similar to Neptune to larger than Jupiter. They have orbits ranging from 3.3 to 4.9 days. Estimated temperatures of the planets range from 2,200 to 3,000 °F, hotter than molten lava and much too hot for life as we know it. All five of the exoplanets orbit stars hotter and larger than Earth’s Sun.
“It’s gratifying to see the first Kepler discoveries rolling off the assembly line,” said Jon Morse, director of the Astrophysics Division at NASA Headquarters. “We expected Jupiter-size planets in short orbits to be the first planets Kepler could detect. It’s only a matter of time before more Kepler observations lead to smaller planets with longer period orbits, coming closer and closer to the discovery of the first Earth analog.”
Launched on March 6, 2009, from Cape Canaveral Air Force Station in Florida, the Kepler mission continuously and simultaneously observes more than 150,000 stars. Kepler’s science instrument, or photometer, already has measured hundreds of possible planet signatures that are being analyzed. While many of these signatures are likely to be something other than a planet, such as small stars orbiting larger stars, ground-based observatories have confirmed the existence of the five exoplanets.
The discoveries are based on approximately 6 weeks’ worth of data collected since science operations began on May 12, 2009. Kepler looks for the signatures of planets by measuring dips in the brightness of stars. When planets cross in front of, or transit, their stars as seen from Earth, they periodically block the starlight. The size of the planet can be derived from the size of the dip. The temperature can be estimated from the characteristics of the star it orbits and the planet’s orbital period.
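The size estimate described above follows from the ratio of disk areas: the fraction of starlight blocked equals (R_planet / R_star) squared. A short sketch, using standard reference radii for the Sun, Jupiter, and Earth:

```python
import math

# Transit depth = (R_planet / R_star)**2, so R_planet = R_star * sqrt(depth).
R_SUN_KM = 696_000.0
R_JUPITER_KM = 71_492.0
R_EARTH_KM = 6_371.0

def planet_radius_km(depth, star_radius_km=R_SUN_KM):
    """Planet radius implied by a fractional transit depth."""
    return star_radius_km * math.sqrt(depth)

# A 1-percent dip around a Sun-like star implies a roughly Jupiter-size
# planet, while an Earth analog dims the star by only about 0.008 percent.
r_hot_jupiter = planet_radius_km(0.01)
earth_depth = (R_EARTH_KM / R_SUN_KM) ** 2
print(round(r_hot_jupiter / R_JUPITER_KM, 2), round(earth_depth * 100, 4))
# → 0.97 0.0084
```

The four-orders-of-magnitude gap between those two depths is why detecting hot Jupiters first, and Earth analogs only after much longer observation, is the expected progression.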
Kepler will continue science operations until at least November 2012. It will search for planets as small as Earth, including those that orbit stars in a warm habitable zone where liquid water could exist on the planet’s surface. Since a planet in the habitable zone of a solar-like star transits about once a year, and three transits are required for verification, it is expected to take at least 3 years to locate and verify an Earth-size planet.
According to Borucki, Kepler’s continuous and long-duration search should greatly improve scientists’ ability to determine the distributions of planet size and orbital period in the future. “Today’s discoveries are a significant contribution to that goal,” Borucki said. “The Kepler observations will tell us whether there are many stars with planets that could harbor life, or whether we might be alone in our galaxy.”
Kepler is the 10th mission of NASA’s Discovery Program. Ames is responsible for the ground system development, mission operations, and science data analysis. NASA’s Jet Propulsion Laboratory (JPL) managed the Kepler mission development. Ball Aerospace & Technologies Corporation, of Boulder, Colorado, was responsible for developing the Kepler flight system. Ball and the Laboratory for Atmospheric and Space Physics at the University of Colorado at Boulder are supporting mission operations. Ground observations necessary to confirm the discoveries were conducted with these ground-based telescopes: Keck I in Hawaii; Hobby-Eberly and Harlan J. Smith in Texas; Hale and Shane in California; WIYN, MMT, and Tillinghast in Arizona; and Nordic Optical in the Canary Islands, Spain.
New Eye on the Sun Delivers Stunning First Images
NASA’s recently launched Solar Dynamics Observatory, or SDO, is returning early images that confirm an unprecedented new capability for scientists to better understand our Sun’s dynamic processes. These solar activities affect everything on Earth. Some of the images from the spacecraft show never-before-seen detail of material streaming outward and away from sunspots. Others show extreme close-ups of activity on the Sun’s surface. The spacecraft also has made the first high-resolution measurements of solar flares in a broad range of extreme ultraviolet wavelengths.
“These initial images show a dynamic Sun that I had never seen in more than 40 years of solar research,” said Richard Fisher, director of the Heliophysics Division at NASA Headquarters. “SDO will change our understanding of the Sun and its processes, which affect our lives and society. This mission will have a huge impact on science, similar to the impact of the Hubble Space Telescope on modern astrophysics.”
Launched on February 11, 2010, SDO is the most advanced spacecraft ever designed to study the Sun. During its 5-year mission, it will examine the Sun’s magnetic field and also provide a better understanding of the role the Sun plays in Earth’s atmospheric chemistry and climate. Since launch, engineers have been conducting testing and verification of the spacecraft’s components. Now fully operational, SDO will provide images with clarity 10 times better than high-definition television and will return more comprehensive science data faster than any other solar observing spacecraft.
SDO will determine how the Sun’s magnetic field is generated, structured, and converted into violent solar events such as turbulent solar wind, solar flares and coronal mass ejections. These immense clouds of material, when directed toward Earth, can cause large magnetic storms in our planet’s magnetosphere and upper atmosphere.
SDO will provide critical data that will improve the ability to predict these space weather events. Goddard Space Flight Center built, operates, and manages the SDO spacecraft for SMD.
“I’m so proud of our brilliant work force at Goddard, which is rewriting science textbooks once again,” said Senator Barbara Mikulski, D-Maryland, chairwoman of the Appropriations Subcommittee on Commerce, Justice, Science, and Related Agencies, which funds NASA. “This time Goddard is shedding new light on our closest star, the Sun, discovering new information about powerful solar flares that affect us here on Earth by damaging communication satellites and temporarily knocking out power grids. Better data means more accurate solar storm warnings.”
Space weather has been recognized as a cause of technological problems since the invention of the telegraph in the 19th century. These events produce disturbances in electromagnetic fields on Earth that can induce extreme currents in wires, disrupting power lines and causing widespread blackouts. These solar storms can interfere with communications between ground controllers, satellites, and airplane pilots flying near Earth’s poles. Radio noise from the storm also can disrupt cell phone service.
SDO will send 1.5 terabytes of data back to Earth each day, which is equivalent to a daily download of half a million songs onto an MP3 player. The observatory carries three state-of-the-art instruments for conducting solar research.
The Helioseismic and Magnetic Imager (HMI) maps solar magnetic fields and looks beneath the Sun’s opaque surface. The experiment will decipher the physics of the Sun’s activity, taking pictures in several very narrow bands of visible light. Scientists will be able to make ultrasound images of the Sun and study active regions in a way similar to watching sand shift in a desert dune. The instrument’s principal investigator is Phil Scherrer of California’s Stanford University. HMI was built through a collaboration of Stanford University and the Lockheed Martin Solar and Astrophysics Laboratory (LMSAL) in Palo Alto, California.
The Atmospheric Imaging Assembly is a group of four telescopes designed to photograph the Sun’s surface and atmosphere. The instrument covers 10 different wavelength bands, or colors, selected to reveal key aspects of solar activity. These types of images will show details never seen before. The principal investigator is Alan Title of the LMSAL, which built the instrument.
The Extreme Ultraviolet Variability Experiment measures fluctuations in the Sun’s radiant emissions. These emissions have a direct and powerful effect on Earth’s upper atmosphere—heating it, puffing it up, and breaking apart atoms and molecules. Researchers don’t know how fast the Sun can vary at many of these wavelengths, so they expect to make discoveries about flare events. The principal investigator is Tom Woods of the Laboratory for Atmospheric and Space Physics, at the University of Colorado at Boulder, which built the instrument.
“These amazing images, which show our dynamic Sun in a new level of detail, are only the beginning of SDO’s contribution to our understanding of the Sun,” said SDO project scientist Dean Pesnell of Goddard.
SDO is the first mission of NASA’s Living with a Star Program, or LWS, and the crown jewel in a fleet of NASA missions that study our Sun and space environment. The goal of LWS is to develop the scientific understanding necessary to address those aspects of the connected Sun-Earth system that directly affect our lives and society.
Underground Ice on Mars Exposed by Impact Crater
Images taken by the Thermal Emission Imaging System camera on NASA’s Mars Odyssey orbiter and by the Context Camera on the Mars Reconnaissance Orbiter show water ice excavated from below the surface by a recent impact that formed a crater. The water ice appears as bright material around the crater’s edge in an image taken in October 2008, and is nearly gone in a follow-up image taken in January 2009. The change in appearance resulted from some of the ice sublimating away during the Martian northern-hemisphere summer, leaving behind dust that had been intermixed with the ice. The thickening layer of dust on top obscured the remaining ice.
Analysis of the fresh ice-exposing craters, reported by Byrne et al. in a September 25, 2009, paper in the journal Science, led the paper’s authors to calculate that if NASA’s Viking Lander 2 had been able to dig slightly deeper than the 4- to 6-inch-deep trench it excavated in 1976, it would have hit water ice. The discovery of large quantities of water ice close to the surface and near the equator of Mars, where landing is easier, may make a profound difference for any future human exploration missions to the Red Planet.
NASA’s Earth Resources-2 research aircraft, with the Jet Propulsion Laboratory’s advanced Airborne Visible/Infrared Imaging Spectrometer instrument aboard, flew from California to Texas on May 6, 2010, for a series of flights to map the Gulf of Mexico oil spill and coastal areas.
Earth Sciences Division Responds to Several Emergencies in 2010
The integrated space and airborne systems operated by the Earth Sciences Division delivered a great year of science, but they also responded to several unique challenges to the people and environment of the planet, including the oil spill in the Gulf of Mexico, the volcanic eruption in Iceland, and the earthquake in Haiti. In most of these cases, many systems were brought into play as a team effort.
At the request of U.S. disaster response agencies, an advanced JPL-built optical sensor flying aboard a NASA research aircraft was among several NASA remote-sensing assets mobilized to help assess the spread and impact of the Deepwater Horizon BP oil spill in the Gulf of Mexico.
As part of the national response to the spill, and at the request of the National Oceanic and Atmospheric Administration (NOAA) and the U.S. Geological Survey (USGS), NASA deployed an instrumented research aircraft, the Earth Resources-2 (ER-2), to the Gulf on May 6, 2010. The ER-2, outfitted with JPL’s Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the Cirrus Digital Camera System, supplied by Ames, was sent to collect detailed images of the Gulf of Mexico and its threatened coastal wetlands.
NASA also made extra satellite observations and conducted additional data processing to assist NOAA, USGS, and the U.S. Department of Homeland Security in monitoring the spill.
“NASA has been asked to help with the first response to the spill, providing imagery and data that can detect the presence, extent, and concentration of oil,” said Michael Goodman, program manager for natural disasters in the Earth Science Division of NASA’s SMD. “We also have longer-term work we have started in the basic research of oil in the ocean and its impacts on sensitive coastal ecosystems.”
NASA pilots flew the ER-2 from Dryden Flight Research Center in California to a temporary base of operations at Johnson Space Center’s Ellington Field in Houston. Along the way, the plane collected data over the Gulf coast and the oil slick to support spill mapping and to document the condition of coastal wetlands before oil landfall. The ER-2 made a second flight on May 20, and, combined, NASA aircraft flew 24 missions and acquired more than 120 hours of data from May through July.
The AVIRIS team, led by JPL’s Robert Green, measured how the water absorbs and reflects light in order to map the location and concentration of oil, which separates into a thin, widespread sheen and smaller, thick patches. Satellites can document the overall extent of the oil but cannot distinguish between the sheen and thick patches. While the sheen represents most of the area of the slick, the majority of the oil is concentrated in the thicker part. AVIRIS has the capability to identify the thicker parts, helping oil spill responders know where to deploy oil-skimming boats and absorbent booms. Researchers also planned to measure changes in vegetation along the coastline and assess if, where, and how oil affected marshes, swamps, bayous, and beaches that are difficult to survey on the ground. The combination of satellite and airborne imagery assisted NOAA in forecasting the trajectory of the oil and in documenting changes in the ecosystem.
From the outset of the spill on April 20, 2010, NASA provided satellite images to Federal agencies from the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on NASA’s Terra and Aqua satellites; the Japanese Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on Terra; and the Advanced Land Imager (ALI) and Hyperion instruments on NASA’s Earth Observing-1 (EO-1) satellite. All of these observations were funneled to the Hazards Data Distribution System operated by the USGS. With its very wide field of view, MODIS provided a big picture of the oil spill and its evolution roughly twice a day. The Hyperion, ALI, and ASTER instruments observed over much smaller areas in finer detail, but less often (every 2 to 5 days).
Other NASA satellite and airborne instruments collected observations of the spill to advance basic research and to explore future remote-sensing capabilities. From space, the JPL-built and -managed Multi-angle Imaging Spectroradiometer instrument on Terra, JPL’s Atmospheric Infrared Sounder instrument on Aqua, and the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) on the joint NASA-France CALIPSO satellite collected data. Another NASA research aircraft, the King Air B200 from Langley Research Center, also collected data over the area of the oil spill, completing its first flight over the spill on May 10. The High Spectral Resolution Lidar onboard the plane uses pulses of laser light to locate and identify particles in the environment. Led by Chris Hostetler of Langley, the lidar provides measurements similar to those from the CALIOP instrument. Data from these space-based and airborne lidars were used to investigate the thickness of the oil spill below the surface of the water and to evaluate the impacts of dispersants used to break up the oil.
“Although NASA’s primary expertise is in using remote-sensing instruments to conduct basic research on the entire Earth system, our observations can be used for societal benefit in response to natural and technological disasters like this oil spill,” said Goodman.
On May 4, 2010, the Icelandic Meteorological Office warned that Eyjafjallajökull showed no signs of ending its eruptive activity in the near future. The office reported that ash from the volcano had reached a height of 19,000 to 20,000 feet above sea level, and had spread 40 to 50 miles east-southeast of the volcano, where it impeded visibility for local residents. The office also reported that lava continued flowing down a steep hill north of the crater. The ALI on NASA’s EO-1 satellite captured images of the volcano.
In response to the disaster in Haiti on January 12, 2010, NASA added a series of flights over earthquake faults in Haiti and the Dominican Republic on the island of Hispaniola. NASA’s Uninhabited Aerial Vehicle Synthetic Aperture Radar, or UAVSAR, left Dryden on January 25, 2010, aboard a modified NASA Gulfstream III aircraft. During its trek to Central America, which ran through mid-February, the JPL-developed repeat-pass L-band radar studied the structure of tropical forests; monitored volcanic deformation and volcano processes; and examined Mayan archaeology sites.
After the Haitian earthquake, NASA managers added science objectives that allowed UAVSAR’s unique observational capabilities to be applied to geologic processes in Hispaniola. UAVSAR’s rapid access to regions of interest, short repeat flight intervals, high resolution, and variable viewing geometry make it a powerful tool for studying ongoing Earth processes.
Space Operations Mission Directorate
The Space Operations Mission Directorate (SOMD) provides NASA with leadership and management of the Agency’s space operations related to human exploration in and beyond low Earth orbit. Space Operations also oversees low-level requirements development, policy, and programmatic oversight. Current exploration activities in low Earth orbit include the space shuttle and International Space Station (ISS) programs. The directorate is similarly responsible for Agency leadership and management of NASA space operations related to launch services, space transportation, and space communications in support of both human and robotic exploration programs. Its main challenges include completing assembly of the ISS; utilizing, operating, and sustaining the ISS; commercial space launch acquisition; future space communications architecture; and transition from the space shuttle to future launch vehicles.
Research on Astronauts’ Muscles Gets to the Heart of the Matter
When humans venture into space for long periods, muscles tend to weaken because they do not have to work as hard without gravity. Of course, the most important muscle is the heart.
While doctors are well aware of the weakening of the heart in space, known as cardiac atrophy, a new study on the ISS seeks to find out exactly how much of the heart muscle decreases in size over a standard 6-month station tour, and how quickly it occurs.
In addition to evaluating cardiac health in space, the Integrated Cardiovascular investigation also will determine how effective the astronauts’ current exercise program is at protecting the heart from getting smaller or weaker.
“This study also will help us determine if there is a risk of abnormal heart rhythms and how significant the risk is in order to develop appropriate countermeasures,” said Dr. Deborah Harm, the international project scientist for the ISS Medical Program at Johnson Space Center.
According to Harm, many crewmembers experience a brief period of lightheadedness and a drop in blood pressure when standing still after coming home to Earth from long-duration missions. Fainting can occur when the heart cannot generate enough force to pump the necessary blood to the brain and the rest of body—either because the muscle is too small or weak, or because there is an abnormal heart rhythm.
“At this time, it is unknown if heart muscle weakening continues throughout a mission or if it levels off at some point. That’s what we want to find out,” Harm said.
Crewmembers on Expedition 20 (2009) were the first to participate. Before, during, and after flight, they measured their heart rates, heart rhythms, and blood pressure for 24 to 48 hours before and after exercise sessions. They also performed on-orbit cardiac ultrasound scans on each other before and after exercise to look at how effectively the heart fills with blood and pumps it to the rest of the body.
“MRI scans were done on crewmembers’ hearts before and after flight to measure exactly how much heart muscle was present and will be compared to the cardiac ultrasound information to better understand how changes in heart muscle are related to cardiac function,” said Dr. Michael Bungo of the investigator team.
“Such an extensive and sophisticated study of the cardiovascular system was virtually impossible before we had six crewmembers onboard the station,” Harm added. “There simply was not enough crew time available to complete all the procedures required for this experiment.”
While in space, crewmembers wore four devices: a portable Holter monitor that measures heart rate continuously for extended periods; a Cardiopres that measures blood pressure with every heartbeat; and two Actiwatches—one on an ankle and one on a wrist—to monitor and record body movements.
The data collected was transmitted to the Payload Operations Center at Marshall Space Flight Center, and delivered to the investigator team for analysis.
This study shows the breadth of international cooperation and collaboration that occurs on the space station. Three international partner agencies are working together to get the best science. The European Space Agency (ESA) provided the Cardiopres device for monitoring blood pressure, and the investigators shared the Holter data with teams for two Canadian-sponsored experiments. One of these experiments also includes ESA investigators.
All of these teams are studying different aspects of the cardiovascular system. Sharing this data among scientists greatly enhances the overall science return. “This allows us to more efficiently and quickly understand the full range of cardiovascular changes than any one investigation could,” Harm said.
Knowledge gained in the Integrated Cardiovascular study may help doctors treat patients on Earth who have been confined or on bed rest for long periods. Patients with heart diseases that change their normal cardiac function may also benefit.
New Experiment Aboard ISS Smiles on the Environment
There’s a new way to look at environmental issues on Earth—from 210 miles up onboard the ISS—and investigators are smiling about the results. The SMILES experiment, more properly known as the Superconducting Submillimeter-wave Limb-emission Sounder, is investigating issues such as ozone depletion and air quality problems.
The experiment launched on the Japan Aerospace Exploration Agency’s (JAXA) H-II Transfer Vehicle—an unmanned cargo ship for station resupply. Housed on the Japanese Experiment Module’s Exposed Facility, which provides a multipurpose platform where science experiments can be deployed and operated in open space, SMILES is gathering data on trace gases known to cause ozone depletion, such as chlorine and bromine compounds. The observations are taken in the stratosphere, the region of the atmosphere 6 to 30 miles above the Earth’s surface.
“Measurements of ozone and trace gases in the stratosphere from instruments such as SMILES are important for understanding the dynamics of Earth’s atmosphere,” said Julie Robinson, ISS program scientist at Johnson.
The advantage of this experiment is the space station’s power and payload resources, which enable researchers to test new technologies. As a result, SMILES can make highly precise measurements of trace atmospheric gases, detecting species in quantities too small to be measured until now.
SMILES observations taken in October 2009 show that ozone amounts are greater around Earth’s equatorial region than at higher latitudes, illustrating the characteristics of stratospheric ozone in its global distribution.
“This is just the beginning,” said Takuki Sano, a member of the SMILES science team with JAXA. “In due course, SMILES, with its full-scale observation, will contribute to the prediction of ozone depletion through analyses of the accumulated observation data, thus clarifying the influence the stratosphere has on the troposphere—the lowest and most dense layer of the atmosphere 10 to 12 miles above the Earth’s surface.”
Alternative Energy Crops Grow in Space
What if space held the key to producing alternative energy crops on Earth? That’s what researchers are hoping to find in a new experiment on the ISS.
The experiment, National Lab Pathfinder-Cells 3, is aimed at learning whether microgravity can help Jatropha curcas plant cells grow faster. Jatropha is known to produce high-quality oil that can be converted into biofuel, a renewable fuel derived from biological matter.
By studying the effects of microgravity on Jatropha cells, researchers hope to accelerate the cultivation of the plant for commercial use by improving characteristics such as cell structure, growth, and development. This is the first study to assess the effects of microgravity on cells of a biofuel plant.
“As the search for alternate energy sources has become a top priority, the results from this study could add value for commercialization of a new product,” said Wagner Vendrame, principal investigator for the experiment at the University of Florida in Homestead. “Our goal is to verify if microgravity will induce any significant changes in the cells that could affect plant growth and development back on Earth.”
Launched on space shuttle Endeavour’s STS-130 mission in February 2010, cell cultures of Jatropha were sent to the space station in special flasks containing nutrients and vitamins. The cells were exposed to microgravity until they returned to Earth aboard space shuttle Discovery’s STS-131 mission.
For comparison studies of how fast the cultures grow, a replicated set of samples is being maintained at the University of Florida’s Tropical Research and Education Center in Homestead.
“Watching the space shuttle go up carrying a little piece of my work is an indescribable experience,” said Vendrame. “Knowing that my experiment could contribute to creating a sustainable means for biofuel production on Earth, and therefore making this a better world, adds special value to the work.”
ISS Expansion Includes a Room with a View
For the past several years, the ISS has been moving steadily closer to completion. But what house is complete without a utility room, a gym, and a picture window?
During the STS-130 mission, space shuttle Endeavour delivered the Tranquility node and its cupola, a dome-shaped extension from Tranquility made up of seven windows. They are the last major U.S. modules to be added to the space station, and together they helped clear out premium workspace in other areas of the station—as well as offer a window on the world.
At 15 feet wide and 23 feet long, the Tranquility node provides a centralized home for the station’s environmental control equipment: one of the systems that removes carbon dioxide from the station’s air, one of the station’s bathrooms, and the equipment that converts urine into drinkable water, all of which previously took up space in the Destiny laboratory. And there is enough room left over to house the station’s new treadmill and its microgravity equivalent of a weight machine, moving them out of the Unity node, where they were in the way whenever spacewalk preparations were going on inside the adjacent Quest airlock.
“It [gave] us a much needed addition to the house, so to speak,” said Bob Dempsey, lead space station flight director for the mission. “[We were] getting to the point where we’re really cramped for space. You might be surprised at that, considering we’re essentially the volume of a 747 and we’ve been adding modules for the last couple of years. You might think we’d be sitting around in a big empty house. But no—every inch is really getting packed up there.”
STS-130 Commander George Zamka put it another way. “It’s like exercising in the office,” he said. “This will be a more logical organization, more focused.”
Though the node has an intensely practical function, there are still fanciful aspects to Tranquility, including its name, which was chosen with the help of a naming contest on www.nasa.gov.
“It harkens back to the Sea of Tranquility, where humans made their very first tentative landing on the Moon,” Zamka said. “They were only there for a few hours, and it was at the very limits of what human beings could do. From that beginning, we’re now putting up a node that will house the majority of the life support equipment for the station, where we’re going to have a permanent presence in space.”
But everyone agrees that the real scope for the imagination will be provided by Tranquility’s 6.5- by 5-foot annex: the cupola. Its purpose is to provide a true view of robotics operations on the station’s exterior—such as those that will be required when the next module, the Russian Rassvet, is added during STS-132.
“Out the window is the truth,” Zamka said. “The video views that we use now, you’re trying to stick together and have a mental image of where things are. When you look out the window, you don’t have to imagine. It’s all right there for you.”
But there’s no question that many people, including Zamka, are looking forward to looking out of it for other views.
“Just the idea of providing this great view of the station and the world beneath us is going to be pretty great,” he said. “That’s not what it’s for, but it will be spectacular.”
The cupola will be like a mini control tower sticking out from the Tranquility node, as opposed to the other station windows, which are flush with the station’s exterior. Its seven windows—one in the center and six around the sides—will provide the only views of the outside of the station from the inside, in particular the Russian and Japanese sections. And with the station just about finished, there’s more to see out there than ever.
So, Zamka said, in addition to the robotic operations and Earth views it will provide, it will also give us a good look at some of the space shuttle fleet’s finest handiwork as the program comes to an end. And that provides its own cause for reflection.
“We’ve come a long way in human space flight because of the shuttle’s capability,” he said. “We’ve launched and retrieved satellites, we’ve done medical research, and now we’ve built this huge space station. We’re almost to the point of passing the baton from the space shuttle to the space station in terms of what our human space flight experience will be now.”
Kwatsi Alibaruho, the lead STS-130 space shuttle flight director, said that even with so much left to do in the program’s final five flights, he was making it a point to spend some time thinking about the subject.
“It’s very easy to get into a routine, to lose oneself in the hustle and bustle of trying to get the work done,” Alibaruho said. “But the shuttle is a unique spacecraft. I find myself thinking a lot about how I’m going to describe this time to my son when he’s old enough to understand. There has never been an operational spacecraft like it before and all indications are that it will be some time before there will be one like it again. I find myself really appreciative of the opportunity I’ve had to serve in this capacity.”
Boeing Delivers ‘Keys’ to ISS
In March of 2010, NASA officially accepted the “keys” to the ISS from its prime contractor, Boeing, at the conclusion of an Acceptance Review Board (ARB) that verified the delivery, assembly, integration, and activation of all hardware and software required by contract.
“The successful completion of this ISS contract is a testament to the hard work, dedication, and perseverance of an amazing international team of government agencies and their commercial contractors,” said ISS program manager Michael Suffredini.
“I want to congratulate the entire Boeing team, including its many suppliers and subcontractors, for their service to NASA and the world,” Suffredini added. “As we near completion of this orbiting laboratory, we are only beginning to understand its true value as the dividends in our investment pay off with advances in medicine, technology, and international relations.”
The ARB was an administrative formality that culminated in submission of government form DD 250, in which Boeing confirmed, and NASA accepted, that all major contract requirements have been met. In effect, the DD 250 transfers station ownership to NASA. The ARB examined in exhaustive detail the past and current performance since the first element was launched in 1998.
The review came on the heels of the STS-130 mission of Endeavour, which delivered the Tranquility module and cupola, the final living areas of the U.S. On-Orbit Segment (USOS). The USOS incorporates all contributions to the station by NASA, ESA, the Canadian Space Agency, and JAXA, and interfaces with the Russian On-Orbit Segment, which includes the components provided by the fifth partner, the Russian Federal Space Agency.
The football field-sized outpost is now 90-percent complete by mass, and 98-percent complete by internal volume. Supporting a multicultural crew of six, the station has a mass of almost 400 tons and more than 12,000 cubic feet of living space.
Upon completion of assembly, the station’s crew and its U.S., European, Japanese, and Russian laboratory facilities will expand the pace of space-based research to unprecedented levels. Nearly 150 experiments are currently underway on the station, and more than 400 experiments have been conducted since research began 9 years ago. These experiments already are leading to advances in the fight against food poisoning, new methods for delivering medicine to cancer cells, and the development of more capable engines and materials for use on Earth and in space.