Title: Removal of noise from seismic data using improved tau-P filters
Classification: Data processing: measuring, calibrating, or testing / Measurement system in a specific environment / Earth science
Type: Reexamination Certificate
Filed: 2002-09-10
Issued: 2004-04-13
Examiner: McElheny, Jr., Donald E. (Department: 2857)
Patent Number: 06721662
Status: Active
BACKGROUND OF THE INVENTION
The present invention relates to processing of seismic data representative of subsurface features in the earth and, more particularly, to improved methods of processing seismic data using improved tau-P filters to remove unwanted noise from meaningful reflection signals.
Seismic surveys are one of the most important techniques for discovering the presence of oil and gas deposits. If the data is properly processed and interpreted, a seismic survey can give geologists a picture of subsurface geological features, so that they may better identify those features capable of holding oil and gas. Drilling is extremely expensive, and ever more so as easily tapped reservoirs are exhausted and new reservoirs are harder to reach. Having an accurate picture of an area's subsurface features can increase the odds of hitting an economically recoverable reserve and decrease the odds of wasting money and effort on a nonproductive well.
The principle behind seismology is deceptively simple. As seismic waves travel through the earth, portions of their energy are reflected back to the surface as the waves traverse different geological layers. Those seismic echoes or reflections give valuable information about the depth and arrangement of the formations, some of which hopefully contain oil or gas deposits.
A seismic survey is conducted by deploying an array of energy sources and an array of sensors or receivers in an area of interest. Typically, dynamite charges are used as sources for land surveys, and air guns are used for marine surveys. The sources are discharged in a predetermined sequence, sending seismic energy waves into the earth. The reflections from those energy waves or “signals” then are detected by the array of sensors. Each sensor records the amplitude of incoming signals over time at that particular location. Since the physical location of the sources and receivers is known, the time it takes for a reflection signal to travel from a source to a sensor is directly related to the depth of the formation that caused the reflection. Thus, the amplitude data from the array of sensors can be analyzed to determine the size and location of potential deposits.
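To make that relationship concrete, a minimal sketch (in Python, not taken from the patent) of the two-way travel time for a single flat reflector in a constant-velocity medium is given below; the function name and the depth and velocity values are hypothetical.

```python
import math

def two_way_travel_time(offset_m, depth_m, velocity_m_per_s):
    """Two-way travel time (s) for a flat reflector in a constant-velocity
    medium: the ray goes from the source down to the reflection point at the
    midpoint and back up to the receiver."""
    half_offset = offset_m / 2.0
    path_length = 2.0 * math.sqrt(depth_m ** 2 + half_offset ** 2)
    return path_length / velocity_m_per_s

# Hypothetical example: a reflector 2000 m deep in a 2000 m/s medium.
for offset in (0.0, 500.0, 1000.0, 2000.0):
    t = two_way_travel_time(offset, depth_m=2000.0, velocity_m_per_s=2000.0)
    print(f"offset {offset:6.0f} m -> travel time {t:.3f} s")
```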
This analysis typically starts by organizing the data from the array of sensors into common geometry gathers. That is, data from a number of sensors that share a common geometry are analyzed together. A gather will provide information about a particular spot or profile in the area being surveyed. Ultimately, the data will be organized into many different gathers and processed before the analysis is completed and the entire survey area mapped.
The types of gathers typically used include: common midpoint, where the sensors and their respective sources share a common midpoint; common source, where all the sensors share a common source; common offset, where all the sensors and their respective sources have the same separation or “offset”; and common receiver, where a number of sources share a common receiver. Common midpoint gathers are the most widely used type of gather today because they allow the measurement of a single point on a reflective subsurface feature from multiple source-receiver pairs, thus increasing the accuracy of the depth calculated for that feature.
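As a rough illustration of how traces might be grouped into common midpoint gathers, the following sketch assumes a simple straight-line geometry and hypothetical trace fields ('source_x', 'receiver_x', 'samples'); it is not the sorting scheme of any particular processing system.

```python
from collections import defaultdict

def sort_into_cmp_gathers(traces):
    """Group traces into common midpoint (CMP) gathers.

    Each trace is assumed to be a dict with hypothetical keys 'source_x' and
    'receiver_x' (positions along a straight line, in meters) plus its
    recorded samples under 'samples'.
    """
    gathers = defaultdict(list)
    for trace in traces:
        midpoint = 0.5 * (trace["source_x"] + trace["receiver_x"])
        offset = abs(trace["receiver_x"] - trace["source_x"])
        # Round the midpoint so nearby source-receiver pairs bin together.
        gathers[round(midpoint, 1)].append({**trace, "offset": offset})
    # Within each gather, order traces by offset for display in the X-T domain.
    return {mp: sorted(g, key=lambda t: t["offset"]) for mp, g in gathers.items()}
```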
The data in a gather is typically recorded or first assembled in the offset-time domain. That is, the amplitude data recorded at each of the receivers in the gather is assembled or displayed together as a function of offset, i.e., the distance of the receiver from a reference point, and as a function of time. The time required for a given signal to reach and be detected by successive receivers is a function of its velocity and the distance traveled. Those functions are referred to as kinematic travel time trajectories. Thus, at least in theory, when the gathered data is displayed in the offset-time domain, or “X-T” domain, the amplitude peaks corresponding to reflection signals detected at the gathered sensors should align into patterns that mirror the kinematic travel time trajectories. It is from those trajectories that one ultimately may determine an estimate of the depths at which formations exist.
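That alignment can be illustrated with a small synthetic X-T panel in which each trace carries a spike on the hyperbolic trajectory t(x) = sqrt(t0^2 + (x/v)^2); the geometry and parameter values below are hypothetical.

```python
import numpy as np

def synthetic_xt_gather(offsets_m, t0_s, velocity_m_per_s, dt_s=0.004, n_samples=1000):
    """Return an (n_samples, n_offsets) X-T panel with a unit spike on the
    hyperbolic trajectory t(x) = sqrt(t0**2 + (x / v)**2)."""
    panel = np.zeros((n_samples, len(offsets_m)))
    for j, x in enumerate(offsets_m):
        t = np.sqrt(t0_s ** 2 + (x / velocity_m_per_s) ** 2)
        i = int(round(t / dt_s))            # nearest time sample
        if i < n_samples:
            panel[i, j] = 1.0
    return panel

# Hypothetical geometry: receivers every 100 m out to 3 km, one reflector at t0 = 1 s.
offsets = np.arange(0.0, 3000.0, 100.0)
gather = synthetic_xt_gather(offsets, t0_s=1.0, velocity_m_per_s=2000.0)
```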
A number of factors, however, make the practice of seismology and, especially, the interpretation of seismic data much more complicated than its basic principles. First, the reflected signals that indicate the presence of geological strata typically are mixed with a variety of noise.
The most meaningful signals are the so-called primary reflection signals, those signals that travel down to the reflective surface and then back up to a receiver. When a source is discharged, however, a portion of the signal travels directly to receivers without reflecting off of any subsurface features. In addition, a signal may bounce off of a subsurface feature, bounce off the surface, and then bounce off the same or another subsurface feature, one or more times, creating so-called multiple reflection signals. Other portions of the signal turn into noise as part of ground roll, refractions, and unresolvable scattered events. Some noise, both random and coherent, is generated by natural and man-made events outside the control of the survey.
All of this noise is occurring simultaneously with the reflection signals that indicate subsurface features. Thus, the noise and reflection signals tend to overlap when the survey data is displayed in X-T space. The overlap can mask primary reflection signals and make it difficult or impossible to identify patterns in the display upon which inferences about subsurface geology may be drawn. Accordingly, various mathematical methods have been developed to process seismic data in such a way that noise is separated from primary reflection signals.
Many such methods seek to achieve a separation of signal and noise by transforming the data from the X-T domain to other domains. In other domains, such as the frequency-wavenumber (F-K) domain or the time-slowness (tau-P) domain, there is less overlap between the signal and noise data. Once the data is transformed, various mathematical filters are applied to the transformed data to eliminate as much of the noise as possible and thereby enhance the primary reflection signals. The data then is inverse transformed back into the offset-time domain for interpretation or further processing.
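As a simplified illustration of that transform-filter-inverse workflow, the sketch below uses an F-K dip filter built from a 2-D FFT rather than the tau-P methods discussed next; the slowness threshold and function names are hypothetical, and the filter criterion is deliberately crude.

```python
import numpy as np

def fk_dip_filter(panel, dt_s, dx_m, max_slowness_s_per_m):
    """Transform an X-T panel to the F-K domain with a 2-D FFT, zero the
    energy whose apparent slowness |k / f| exceeds a threshold, and inverse
    transform back to X-T. A crude dip filter, for illustration only."""
    nt, nx = panel.shape
    spectrum = np.fft.fft2(panel)
    f = np.fft.fftfreq(nt, d=dt_s)[:, None]   # temporal frequencies (Hz)
    k = np.fft.fftfreq(nx, d=dx_m)[None, :]   # spatial wavenumbers (1/m)
    with np.errstate(divide="ignore", invalid="ignore"):
        slowness = np.abs(k) / np.abs(f)      # apparent slowness of each (f, k) pair
        keep = np.where(np.isfinite(slowness),
                        slowness <= max_slowness_s_per_m, True)
    return np.real(np.fft.ifft2(spectrum * keep))
```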
For example, so-called Radon filters are commonly used to attenuate or remove multiple reflection signals. Such methods rely on Radon transformation equations to transform data from the offset-time (X-T) domain to the time-slowness (tau-P) domain, where it can be filtered. More specifically, the X-T data is transformed along kinematic travel time trajectories having constant velocities and slownesses, where slowness p is defined as reciprocal velocity (or p = 1/v).
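A minimal slant-stack version of such a transformation is sketched below: the X-T panel is summed along lines t = tau + p*x for a range of trial slownesses. Production Radon transforms typically use hyperbolic or parabolic trajectories and a proper (often least-squares) inverse, which this toy forward operator omits.

```python
import numpy as np

def linear_tau_p(panel, offsets_m, dt_s, slownesses_s_per_m):
    """Forward slant stack: for each trial slowness p, sum the X-T panel
    along the line t = tau + p * x using nearest-sample alignment."""
    nt, nx = panel.shape
    taup = np.zeros((nt, len(slownesses_s_per_m)))
    for ip, p in enumerate(slownesses_s_per_m):
        for ix, x in enumerate(offsets_m):
            shift = int(round(p * x / dt_s))  # moveout at this offset, in samples
            if 0 <= shift < nt:
                taup[: nt - shift, ip] += panel[shift:, ix]
    return taup
```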
Such prior art Radon methods, however, typically first process the data to compensate for the increase in travel time as sensors are further removed from the source. This step is referred to as normal moveout or “NMO” correction. It is designed to eliminate the differences in time that exist between the primary reflection signals recorded at close-in receivers, i.e., at near offsets, and those recorded at remote receivers, i.e., at far offsets. Primary signals, after NMO correction, generally will transform into the tau-P domain at or near zero slowness. Thus, a mute filter may be defined and applied in the tau-P domain. The filter mutes, i.e., excludes all data, including the transformed primary signals, below a defined slowness value p_mute.
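A sketch of those two steps, constant-velocity NMO correction followed by a hard mute below p_mute in the tau-P domain, might look like the following; both simplifications and all parameter names are hypothetical rather than drawn from the patent.

```python
import numpy as np

def nmo_correct(panel, offsets_m, dt_s, velocity_m_per_s):
    """Constant-velocity NMO correction: the sample placed at zero-offset
    time t0 is read from the recorded time t = sqrt(t0**2 + (x / v)**2),
    flattening primary reflections across offset."""
    nt, nx = panel.shape
    corrected = np.zeros_like(panel)
    t0 = np.arange(nt) * dt_s
    for ix, x in enumerate(offsets_m):
        t = np.sqrt(t0 ** 2 + (x / velocity_m_per_s) ** 2)
        it = np.round(t / dt_s).astype(int)
        valid = it < nt
        corrected[valid, ix] = panel[it[valid], ix]
    return corrected

def mute_below(taup, slownesses_s_per_m, p_mute):
    """Zero every tau-P trace whose slowness is below p_mute, discarding the
    flattened primaries and retaining the larger-slowness (multiple) energy."""
    keep = np.asarray(slownesses_s_per_m) >= p_mute
    return taup * keep[None, :]
```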
The data that remains after applying the mute filter contains a substantial portion of the signals corresponding to multiple reflections. That unmuted data is then transformed back into offset-time space and is subtracted from the original data in the gather. The subtraction process removes the multiple reflection signals from the data gather, leaving the primary reflection signals more readily apparent and easier to interpret.
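For illustration, the inverse transform and subtraction can be sketched as an adjoint slant stack followed by a straight subtraction; a practical implementation would use a true inverse (with rho filtering or least-squares weighting) and often an adaptive subtraction, neither of which is shown here.

```python
import numpy as np

def inverse_tau_p(taup, offsets_m, dt_s, slownesses_s_per_m, n_time_samples):
    """Adjoint slant stack: spread each tau-P trace back along t = tau + p * x.
    A toy operator; a real inverse would apply rho filtering or solve a
    least-squares problem."""
    panel = np.zeros((n_time_samples, len(offsets_m)))
    for ip, p in enumerate(slownesses_s_per_m):
        for ix, x in enumerate(offsets_m):
            shift = int(round(p * x / dt_s))
            if 0 <= shift < n_time_samples:
                panel[shift:, ix] += taup[: n_time_samples - shift, ip]
    return panel

def subtract_multiples(original_xt, modeled_multiples_xt, scale=1.0):
    """Subtract the inverse-transformed (unmuted) multiple model from the
    original gather, leaving the primary reflections more apparent."""
    return original_xt - scale * modeled_multiples_xt
```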
It will be appreciated, however, that in such prior art Radon filters, noise and multiple
Inventors: Robinson, John M.; Willhelm, Keith B.