Copyright © Statcom Ltd. 2014
"Serving the Oil & Gas Industry for over 38 years"
Surface Consistent Amplitude Scaling
SCAMPS calculates trace amplitude scale factors from a matrix of trace amplitude spectra or a matrix of trace rms amplitudes. The amplitude spectra are obtained from shot-oriented traces that have been scaled previously by a constant factor and an exponential function to compensate for spherical divergence and inelastic attenuation. A design window that slopes with offset is used to specify the part of the data to be used in creating the spectrum. Window size compensation is applied so that the varying window length does not unequally weight the amplitude contribution of traces from different offsets.
SCAMPS simultaneously calculates the bias, shot, receiver, offset and CDP components of the trace amplitudes by iterating using the Gauss-Seidel method.
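The iteration can be sketched in a few lines. The following is a minimal illustration, not SCAMPS code: it assumes a purely additive model of trace log amplitudes with bias, shot, receiver and offset terms (the CDP term, optimized weighting and trace rejection are omitted), and all names are ours.

```python
import numpy as np

# Minimal Gauss-Seidel decomposition of trace log amplitudes into
# bias + shot + receiver + offset components (illustrative sketch only;
# the CDP term and optimized weighting are omitted).
def decompose(log_amp, shot_idx, recv_idx, off_idx, n_iter=100):
    n_shot, n_recv, n_off = (i.max() + 1 for i in (shot_idx, recv_idx, off_idx))
    bias = log_amp.mean()
    shot, recv, off = np.zeros(n_shot), np.zeros(n_recv), np.zeros(n_off)
    for _ in range(n_iter):
        # update each component in turn from the current residual
        r = log_amp - bias - recv[recv_idx] - off[off_idx]
        shot = np.bincount(shot_idx, r, n_shot) / np.bincount(shot_idx, minlength=n_shot)
        r = log_amp - bias - shot[shot_idx] - off[off_idx]
        recv = np.bincount(recv_idx, r, n_recv) / np.bincount(recv_idx, minlength=n_recv)
        r = log_amp - bias - shot[shot_idx] - recv[recv_idx]
        off = np.bincount(off_idx, r, n_off) / np.bincount(off_idx, minlength=n_off)
        for comp in (shot, recv, off):   # absorb component means into the bias
            bias += comp.mean()
            comp -= comp.mean()
    return bias, shot, recv, off
```

Re-centering each component onto the bias term fixes the inherent non-uniqueness of the additive decomposition.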
A specified band of frequencies is normally used to estimate the input trace rms amplitudes so that low or high frequency noise outside the band is not included in the calculations.
Traces which exceed a specified threshold of the standard deviation of the distribution of trace amplitudes may be rejected at specified iterations. These may later be flagged and optionally omitted in processing. Traces edited before analysis are not included.
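The band-limited rms estimate and the standard-deviation rejection described above might be sketched as follows. This is a hedged illustration with assumed names, not the SCAMPS implementation:

```python
import numpy as np

def band_rms(traces, dt, f_lo, f_hi):
    """Relative rms amplitude of each trace within the [f_lo, f_hi] Hz band."""
    spec = np.abs(np.fft.rfft(traces, axis=-1))
    freqs = np.fft.rfftfreq(traces.shape[-1], dt)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sqrt((spec[:, band] ** 2).mean(axis=-1))

def flag_outliers(rms, n_std=3.0):
    """Flag traces whose log amplitude deviates by more than n_std sigmas."""
    log_a = np.log(rms)
    return np.abs(log_a - log_a.mean()) > n_std * log_a.std()
```

Working on log amplitudes keeps the test symmetric for anomalously strong and anomalously weak traces.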
After the input amplitudes are decomposed into each of the components, surface consistent scaling factors for each trace are calculated using the bias, shot, geophone and offset components. Optionally, the CDP component may also be included. After the traces are multiplied by all the scaling factors, the amplitude level of the signal that is determined to be surface consistent will be at a user specified value.
To maintain relative amplitudes in the data, subsequent processing to improve the spectral balance and signal-to-noise ratio should treat the signal in a surface consistent manner. Surface consistent amplitude analysis at this point should show that little residual amplitude scaling is required. However, a time or offset varying residual amplitude function may be derived and applied at this stage to compensate for other factors which affect the amplitude in a line consistent manner.
Taner, M.T. and Koehler, F., 1981, Surface Consistent Corrections: Geophysics, v. 46, p. 17-22.
Surface Consistent Deconvolution
SCDCON calculates surface consistent deconvolution operators for shot-oriented records from a matrix of trace amplitude spectra. The amplitude spectra are obtained from traces that have been scaled previously by an exponential function and surface consistent amplitude scaling factors. A design window that slopes with offset is used to specify the part of the data to be used in creating the spectrum. Window size compensation is applied so that the varying window length does not unequally weight the contribution of traces from different offsets.
SCDCON simultaneously calculates the bias, shot, receiver, offset and CDP components of the trace spectra by iterating using the Gauss-Seidel method. Most of the energy is contained in the bias component, which is the spectrum that is consistent for the entire line, and a single wavelet for the line may be derived from this component. The other components contain distortions to the average wavelet which result from varying source and surface conditions.
Traces which have been rejected by the preceding SCAMPS analysis may be omitted from the calculation of the spectral components.
Decomposition is performed independently at each discrete frequency of the trace log spectra. Normally three iterations are performed. Optimized weighting factors are used in the iterations to increase the rate of convergence, and user supplied weights may also be used. Alpha trim calculations may optionally be used to calculate each component.
After the input spectra are decomposed into each of the components, surface consistent deconvolution operators for each trace are calculated using the bias, shot and geophone components. The phase component of the operator for each trace is derived from the minimum phase spectrum of the reconstructed components for that trace. Normally, no pre-whitening of the spectra is required or used in the calculation of the operator. Convolution with each operator may be performed in the time domain but is usually performed in the frequency domain where no truncation of the length of the operator is necessary.
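The minimum phase construction can be sketched with the standard real-cepstrum folding recipe. The function below (names ours) builds a minimum phase inverse operator from an amplitude spectrum and applies it by frequency-domain multiplication, as described above; it is a single-trace illustration only:

```python
import numpy as np

def min_phase_inverse(amp, eps=1e-6):
    """Minimum phase spectrum whose amplitude is 1/amp (full FFT length)."""
    log_a = -np.log(amp + eps)          # log amplitude of the inverse
    cep = np.fft.ifft(log_a).real       # real cepstrum
    n = len(cep)
    fold = np.zeros(n)                  # fold onto positive quefrencies
    fold[0], fold[n // 2] = cep[0], cep[n // 2]
    fold[1:n // 2] = 2 * cep[1:n // 2]
    return np.exp(np.fft.fft(fold))

# usage: op = min_phase_inverse(np.abs(np.fft.fft(wavelet)))
#        deconvolved = np.fft.ifft(np.fft.fft(trace) * op).real
```

Applied to a minimum phase wavelet, this operator compresses it to a spike; the small eps plays the role of a pre-whitening level.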
The CDP and offset components are usually not applied in the normal surface consistent treatment since their use may distort AVO and subsurface information. If it is not critical that AVO information be preserved, the application process may include an option to adapt the amplitude component of the operator to each individual trace while maintaining the surface consistent phase solution. This adaptation will attempt to balance the non-surface consistent noise as well as the signal.
After deconvolution without the adaptive option, the random and source generated noise will remain and likely appear stronger because it is not part of the decomposed spectral components; only those frequencies that are in the applied components are balanced by the deconvolution. Non-surface consistent noise should be attenuated by other processes such as ground roll equalization, filtering, etc.
Instrument and geophone dephasing or designature phasing may be applied along with the application of the deconvolution to produce an output that more closely resembles a zero-phase response to the reflection sequence.
Otis, R.M. and Smith, R.B., 1977, Homomorphic Deconvolution by Log Spectral Averaging: Geophysics, v. 42, p. 1146-1157.
Cary, P.W. and Lorentz, G.A., 1991, Aspects of Four Component Surface Consistent Processing: CSEG Convention Abstracts, p. 57-58.
Wang, X., 1992, Five Component Adaptive Surface Consistent Deconvolution: CSEG Convention Abstracts, p. 35.
Instrument and Geophone Dephasing
DEPHASE creates an operator which is used to alter the amplitude and/or phase characteristics of data obtained from a seismic recording system. The desired effect is to remove or alter the effects of the field recording system on the data so that the result of a subsequent deconvolution process that is applied to that data will more closely approximate a consistent representation of the reflection sequence.
The operator may be based on the recording instruments, the geophones, or both. The impulse response of the recording instruments is derived from the specification curves published by the instrument manufacturer, and the response of the geophone component is obtained from the resonant frequency and damping factor of the geophones.
Dephasing may be implemented in a variety of ways, depending on the desired output. Generally the process consists of creating an operator that shapes the impulse response of the recording system into a desired response. The desired output response (or model response) may be one of the following:
A zero-phase equivalent of the input.
The model has the same amplitude spectrum as the input, but it has a zero-phase spectrum. This is equivalent to having a recording system with a zero-phase response, so that the phase of the original data is not altered but the amplitudes are filtered just as in the original recording. This method has been used extensively in the industry and generally gives good results, particularly when the system's low-cut is at or below the usable bandwidth of the data. In this case, most of the undesirable effects of the filter occur outside the frequency band of interest.
A zero-phase spike.
The model is a zero-phase spike which has a flat amplitude response. This is equivalent to removing the recording system effects entirely, also referred to as system inversion. Here the operator has the inverse amplitude spectrum and the reverse phase spectrum of the system. This option is used if the system exhibits considerable filtering effects on the frequency band of interest, such as with the use of geophones with a relatively high natural frequency. The disadvantage of this option is that undesirable effects such as ground-roll or unwanted high amplitude noise will be restored to their original levels, and with the added distortion associated with the precision lost in the original filtering; the original signal may be impossible to recover fully.
The system response of another recording system.
The model is the system response of another recording system, instruments and/or geophones, and the process is referred to as response matching. In this case the rephasing operator has an amplitude spectrum that is the ratio of the desired and actual amplitude spectra, and a phase spectrum that is the difference between the desired and actual phase spectra. This option may be used, for example, to shape input data recorded on one system to match previous processing done to other data in the same project that originated from another system (e.g. different instruments or filter settings) and that was processed without dephasing.
A Wiener-Levinson minimum phase equivalent of the input.
The model is a minimum phase equivalent of the recording system derived by a Wiener-Levinson least squares shaping filter. The effect is to replace the mixed phase recording system with an all-pole minimum phase system, and the resulting data is more suited to the subsequent use of minimum phase time domain spiking deconvolution. This process expects that the system and the data can be approximated by an all-pole autoregressive model.
A Designature minimum phase equivalent of the input.
The model is a minimum phase equivalent of the recording system derived by the Hilbert transform technique. Data dephased by this operator is more suited to the use of a minimum phase frequency domain deconvolution. This is one step in the DESIGNATURE process that consists of converting all input data to minimum phase and subsequently applying a minimum phase statistical deconvolution operation that is based on signal detection in the presence of noise, such as surface consistent deconvolution. This process is more general than the least squares process in that it uses a moving average autoregressive model whose transfer function may contain zeros as well as poles and can more accurately represent seismic recording systems and data.
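As a concrete sketch of the simplest of these models, the zero-phase equivalent: the operator below carries unit amplitude and the negative of the system phase, so the system's amplitude filtering is retained while its phase is removed. The names and the short recording-system impulse response used in testing are purely illustrative, and an exact zero in the system spectrum is assumed not to occur:

```python
import numpy as np

def zero_phase_operator(system_ir, n):
    """Unit-amplitude spectrum carrying the negative of the system phase."""
    S = np.fft.fft(system_ir, n)
    return np.conj(S) / np.abs(S)   # assumes |S| has no exact zeros

def dephase(trace, system_ir):
    """Shape the embedded system response to its zero-phase equivalent."""
    op = zero_phase_operator(system_ir, len(trace))
    return np.fft.ifft(np.fft.fft(trace) * op).real
```

A trace consisting of the system impulse response itself comes out as a symmetric, zero-phase wavelet with its peak at zero lag.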
For vibroseis data, another step in the DESIGNATURE process is the conversion of the correlated field data to its minimum phase equivalent before deconvolution. This may be done by convolving the correlated vibroseis data with a dephase correction operator which may be generated using the Wood's technique. This is done by the following method:
The Klauder wavelet (auto-correlation of the sweep) is obtained from a recorded sweep or a recorded auto-correlation trace.
The wavelet is deconvolved with a minimum phase deconvolution operator whose amplitude spectrum is the inverse of the wavelet.
The result of the deconvolution is flipped in time to produce the DEVIB operator.
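The three steps above can be sketched as follows, assuming an illustrative 10-60 Hz linear sweep and a small assumed pre-whitening level in the operator design; the cepstrum folding used for the minimum phase spectrum is the standard construction, and all names are ours:

```python
import numpy as np

def min_phase_spectrum(amp):
    """Minimum phase spectrum with the given (full-length) amplitude."""
    cep = np.fft.ifft(np.log(amp)).real   # real cepstrum of the amplitude
    n = len(cep)
    fold = np.zeros(n)                    # fold onto positive quefrencies
    fold[0], fold[n // 2] = cep[0], cep[n // 2]
    fold[1:n // 2] = 2 * cep[1:n // 2]
    return np.exp(np.fft.fft(fold))

dt, n = 0.002, 2048
t = np.arange(n) * dt
sweep = np.sin(2 * np.pi * (10 + 25 * t / t[-1]) * t)   # 10-60 Hz linear sweep

# 1. Klauder wavelet: autocorrelation of the sweep (causal half)
klauder = np.correlate(sweep, sweep, "full")[n - 1:]
K = np.fft.fft(klauder)

# 2. minimum phase deconvolution operator with amplitude 1/|K| (pre-whitened)
op = min_phase_spectrum(1.0 / (np.abs(K) + 1e-3 * np.abs(K).max()))
decon = np.fft.ifft(K * op).real

# 3. flip the deconvolved wavelet in time to produce the DEVIB operator
devib = decon[::-1]
```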
If the field data is correlated with a sweep that has also passed through the instruments (which is normally the case for vibroseis data correlated in the field), the resulting data has already been instrument dephased by the correlation process, and only geophone dephasing should be applied.
The use of the vibroseis correction operator in the processing of vibroseis data will result in a section that more closely matches the phase of a conventional dynamite section.
Berkhout, A.J., 1977, Least-Squares Inverse Filtering and Wavelet Deconvolution: Geophysics, v. 42, p. 1369.
Ristow, D. and Jurczyk, D., 1975, Vibroseis Deconvolution: Geophysical Prospecting, v. 23, p. 363-379.
Wood, L.C. et al., 1978, The Debubbling of Marine Source Signatures: Geophysics, v. 43, p. 715-729.
Yilmaz, O., 1987, Seismic Data Processing, Society of Exploration Geophysicists.
Time-Offset Residual Amplitudes
TORA calculates and applies a matrix of time and offset residual amplitude scaling factors that are line consistent and are derived after the application of surface consistent amplitude and deconvolution processing.
Many factors influence the amplitudes which are present on prestack seismic data, and not all effects may be properly corrected, so the resulting section may not accurately represent the amplitude variations due to subsurface geology alone. Initial theoretical amplitude scaling to compensate for spherical divergence and inelastic attenuation is usually based on the analysis of a single (usually large) trace window and assumes a stationary wavelet. Other factors such as transmission losses, emergence angles, array attenuation, surface tension, structure, event tuning, mode conversion, multiples, noise, etc. may each influence the amplitude, perhaps to a greater extent than the subtle AVO effects that are expected.
TORA scaling is used to reduce the influence of these and other factors which exhibit a consistent pattern throughout the entire line. A matrix of time and offset dependent amplitude scalars is derived from the data, usually after the application of a bandpass filter to reduce the effects of frequency dependent noise that may still be present in the data set. The scalar matrix may be displayed in trace form as a quality control feature.
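A toy sketch of deriving such a scalar matrix (assumed names and a much-simplified scheme, not the TORA algorithm itself): compute the rms of each trace in fixed time windows, then take scalars that bring each (offset, window) cell to a common target level.

```python
import numpy as np

def cell_rms(data, win_len):
    """rms of each trace over consecutive time windows of win_len samples."""
    n_x, n_t = data.shape
    n_w = n_t // win_len
    d = data[:, :n_w * win_len].reshape(n_x, n_w, win_len)
    return np.sqrt((d ** 2).mean(axis=-1))

def tora_scalars(data, win_len, target=1.0, eps=1e-12):
    """Scalars that bring every (offset, window) cell to the target level."""
    return target / (cell_rms(data, win_len) + eps)
```

In practice the scalar matrix would be averaged over the whole line and smoothed before application, so that only line consistent trends are compensated.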
Time Parabola Radon Transform Filter (Demultiple)
TPFILT is a time domain implementation of a time-parabola Radon transform filter. The input data, static and NMO corrected pre-stack gathered traces, may be approximated as a linear superposition of many parabolic events of constant amplitude in the presence of additive noise. The weighting coefficients of these parabolas make up a particular form of the Radon transform, the TP transform, where T is the two-way zero-offset time and the ray parameter P is the moveout at the maximum offset. This transform may be used to perform filtering if the inverse transform yields a sufficiently accurate representation of the input data. Filtering may take the form of noise reduction, long period multiple attenuation, or both.
A description of the transform and filtering process is shown here using a synthetic common offset gathered record, Figure 1 (figures not present, please call for brochure), which consists of several flat or parabolic events in the presence of some linear and random noise. A synthetic record rather than a real record is used here to show more clearly the response of the transform filter. The parabolic events on the synthetic have moveouts of -15, 15, 30, 60, 80, 100, and 200 ms at the maximum offset. A forward TP transform of this record is shown in Figure 2. The transform is applied over a time window from 200-900 ms and is modeled using parabolas with moveouts of -48 to +228 ms at the maximum offset. The amplitude at each sample of a component TP trace is the weighting coefficient of a parabola of that moveout at that time. If the moveout value of a particular event does not match exactly the moveout of one of the modeled moveouts, that event exhibits smearing over several TP components.
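The conventional forward transform (a constant moveout stack along each modeled parabola t = tau + p·(x/xmax)²) can be sketched as below. This is a deliberately naive illustration with nearest-sample shifts and none of the interpolation, window compensation or inverse weighting discussed in the text; names are ours:

```python
import numpy as np

def forward_tp(data, offsets, p_values, dt):
    """data: (n_offsets, n_samples); p_values: moveouts (s) at max offset."""
    n_x, n_t = data.shape
    xn = (offsets / offsets.max()) ** 2     # normalized squared offset
    tp = np.zeros((len(p_values), n_t))
    for ip, p in enumerate(p_values):
        for ix in range(n_x):
            shift = int(round(p * xn[ix] / dt))   # parabolic delay at offset ix
            src = np.arange(n_t) + shift
            ok = (src >= 0) & (src < n_t)
            tp[ip, ok] += data[ix, src[ok]]
    return tp / n_x
```

A single parabolic event of matching moveout stacks to a single strong sample in the TP panel; mismatched moveouts smear across neighboring P traces, as described above.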
Lateral smear may also be present for several other reasons, which include:
(a) Non-parabolic events. These events do not exactly fit the modeled events, and a portion of their energy is allocated to several moveouts or to noise.
(b) Amplitude changes with offset. These amplitude changes do not fit the modeled events and, as in (a), this energy is allocated to several moveouts in an attempt to model the data as resulting from the interference patterns of many other constant amplitude parabolas.
(c) The type of transform. The resolution of a transform method is dependent upon its ability to cluster the energy.
There are many possible methods used to create the TP domain record. Some of these are:
The conventional time-domain TP transform, which is formed by simply stacking along parabolic trajectories (i.e. constant moveout stacks). It exhibits severe lateral smearing of events, and its inverse transform does not accurately represent the input data. This imperfect transform results mainly from the fact that several trajectories all share the same energy at near offsets. Offset weighted transforms also exhibit this problem, but to a lesser extent.
The generalized inverse is a more perfect transform in that the shared energy is distributed among all TP components, not duplicated. The inverse transform is then a more accurate representation of the input record (in a least squares error sense). However, if least squares error is the only criterion for building the transform, more smearing may occur by creating multiple-like energy patterns which are not possible in a geophysical model but which mathematically resolve more input energy.
The stochastic inverse incorporates a sparseness factor (similar to the prewhitening factor in deconvolution) which forces the events to be locally focused and results in an accurate inverse transform. The method, however, requires a priori information about the system in order to control the sparseness.
The TPFILT inverse uses an ordered and weighted iterative Gauss-Seidel method in the time domain to distribute the energy. This method also imposes additional constraints on allocating energy to multiple moveouts by restricting all amplitudes and positions to those which are realizable in a geophysical sense. Working in the time domain also allows for a time variant transform that does not require the modeling of front end refracted noise, and avoids a front end muted zone resulting from the interference of artificially created parabolas which would otherwise be introduced along with real events in the shallow near-trace section.
The reverse transform converts the TP (time-parabola) representation of the data back to the TX (time-distance) orientation, as shown in Figure 3. This is done by stacking across the TP domain along an inverse linear trajectory whose moveout is determined by the distance of the desired output trace. This process is called inverse stacking.
Any type of TP transform is not an exact transform in that the inverse does not result in an exact rendering of the input data. The system is over-determined and under-constrained, and this results in an imperfect transform. Noise, for example, does not model and is not transformed. The signal not transformed may be examined by subtracting the reverse TX transformed data from the original record. Figure 4 shows the residual record for the example used here. The inability to transform noise makes this a useful tool for noise reduction. A percentage of the noise free data may be combined with a portion of the original data (as addback) to produce a noise reduced record, and the result of this procedure on our example is shown in Figure 5.
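Inverse stacking and the addback arithmetic can be sketched as follows (illustrative names, nearest-sample shifts as in the simplest forward transform):

```python
import numpy as np

def inverse_tp(tp, offsets, p_values, dt):
    """Reverse TP -> TX transform by stacking across the TP panel."""
    n_p, n_t = tp.shape
    xn = (offsets / offsets.max()) ** 2
    out = np.zeros((len(offsets), n_t))
    for ix in range(len(offsets)):
        for ip, p in enumerate(p_values):
            shift = int(round(p * xn[ix] / dt))   # parabolic delay at this offset
            src = np.arange(n_t) - shift
            ok = (src >= 0) & (src < n_t)
            out[ix, ok] += tp[ip, src[ok]]
    return out

def noise_reduce(original, modeled, addback=0.33):
    """Combine the modeled (noise-free) data with a fraction of the original."""
    return (1.0 - addback) * modeled + addback * original
```

A single spike in the TP panel reconstructs as a parabolic event across offsets; restricting the stacked P range to a moveout window gives the partial inverse used for multiple attenuation.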
A partial inverse transform may be produced by inverse stacking over only those P-traces with a specified range of parabolic moveouts. This may be employed as a method of long-period multiple attenuation by producing a multiple-only record which is subtracted from the input data. Figure 6 shows a partial inverse transform which consists of parabolic energy from 24 to 228 ms. only, and this record is subtracted from the input to produce the multiple attenuated record shown in Figure 7.
As well, TP noise reduction and TP multiple attenuation may be applied together. The result of both processes is shown in Figure 8.
Since the amplitude associated with any parabola is consistent over all offsets, the TP method of multiple attenuation has an advantage over an FK method, which is also based on differential moveout but which fails to attenuate the multiples at near offsets where the moveout differences between primaries and multiples are negligible.
An extreme exception to constant amplitude events is converted waves, which have little or no energy at near offsets. In this case, TP multiple attenuation is inappropriate because reversed events are introduced in the near offsets by a combination of parabolic events which attempt to model the bulk of the converted energy as multiples.
When using multiple attenuation, care must be taken to ensure that valid primary energy is not attacked too severely. The strong clustering of TPFILT is desirable for removing multiples that may be close (within 24 ms) to the primaries, but the velocity function applied to the input data must be as exact as possible. In areas of strong multiples, a mild TP multiple attenuation applied to common offset records prior to final velocity analysis is recommended.
The TP multiple attenuation is applied to CDP gathered data rather than shot oriented data since it is less affected by the structure of the primaries or multiples, and the apex of the parabolic events is more likely to be at the zero-offset position. The gathered data should have all static corrections applied since residual static patterns will deteriorate the transform.
More appropriate noise reduction requires the use of both positive and negative moveouts, and a linear ray parameter transform would have a more desirable spatially invariant and symmetric impulse response. The Radon transform cannot accurately represent some features such as sharp discontinuities, diffractions or amplitude variation with offset. Too high a percentage of noise reduction may tend to attenuate useful data, and so a relatively small amount (33% or less) is recommended.
Similar tests to the ones shown here may be carried out on common offset panels from real data sets to determine the appropriate parameters for a specific area.
Thorson, J.R. and Claerbout, J.F., 1985, Velocity-Stack and Slant-Stack Stochastic Inversion: Geophysics, v. 50, p. 2727-2741.
Hampson, D., 1986, Inverse Velocity Stacking for Multiple Elimination: CSEG Journal, v. 22, p. 44-55.
Beylkin, G., 1987, Discrete Radon Transform: IEEE Transactions on Acoustics, Speech, and Signal Processing, v. ASSP-35, no. 2, p. 162-172.