
eVTOL: Aero Vibration Fundamentals & A New Perspective on Its Elemental Thermo Sciences

Updated: Feb 7, 2025



In the first part of this article, we discuss content-based analysis and retrieval techniques for eVTOL flight data. To account for the interdisciplinary character of this intriguing field, we start with some fundamentals on representations of aerodynamic and thermodynamic data related to eVTOLs and digital signal processing of these data. In particular, we summarize basic facts on simulated flight data, sensor data, and data formats. We then review various forms of the Fourier transform and give a short account of digital convolution filters. Doing so, we hope to refine and sharpen the understanding of the required basic signal transforms. This will be essential for the design as well as for the proper interpretation of relevant flight features in future publications.


Fig 1.1 Graphs representing eVTOL flight parameters: helical tip Mach, rotor RPM, motor temperature, and excitation frequencies over time.

1.1 eVTOL Data Representations


Multimodal digital flight data repositories contain textual, visual, and sensor data. Among these multimedia-based types of information, eVTOL flight data pose many problems, since flight information is represented in diverse data formats. These formats, depending upon particular applications, differ fundamentally in their respective structures and content. In this section, we concentrate on three widely used formats for representing eVTOL flight data: The symbolic flight parameter format contains information on flight parameters such as time, position, velocity, acceleration, rotor speed, motor temperature, and further hints concerning control inputs, turbulence intensity, and aerodynamic forces. The purely physical sensor data format encodes the waveform of sensor signals as used for data logging during flight tests. Finally, a hybrid format may be thought of as a combination of the last two data formats that explicitly represents content-based information such as flight events, vibrations, frequencies present in the data but may also encode aerodynamic and thermal subtleties of some specific flight conditions.



1.1.1 Flight Parameter Representation


A flight parameter representation, also referred to as a flight log or telemetry data, gives a symbolic description of what we commonly refer to – in particular for eVTOL flight analysis – as the "flight profile." The flight log encodes a flight mission in a formal language and depicts it in a graphical-textual form, which allows an engineer or pilot to analyze a flight by following the recorded parameters. Figure 1.1 above shows an example of a simplified flight profile. In a flight log, the flight is represented by data points, which, in turn, are given in terms of attributes such as time, position (GPS coordinates), velocity, acceleration, rotor speed, motor temperature, vibration frequencies, and control surface positions. The flight phase is specified by textual notations such as "Takeoff," "Cruise," "Descent," or "Landing" for global flight phase instructions and "rotor acceleration," "deceleration," or "stabilization" for local flight condition variations. Similarly, aerodynamic forces and moments are described by terms such as lift, drag, thrust, pitch moment, or roll moment. The flight log can be seen as a rough guide to flight performance that requires previous knowledge and profound experience of the flight engineer to create the intended flight profile. Typically, there is a lot of space for variations in flight execution, which often leads to variations in flight path, control inputs, or aerodynamic forces. For example, the duration of a hovering phase may vary significantly between two different flights. Even at the data point level there may be variations in the sensor readings implied by factors such as turbulence, sensor noise, or atmospheric conditions.
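To make the flight-log idea concrete, the following minimal sketch models a symbolic data point and derives the time spent in each global flight phase from a sequence of points. The class fields and phase labels are illustrative assumptions, not a standardized schema.

```python
from dataclasses import dataclass

# A minimal sketch of a symbolic flight-log data point; real flight logs
# define many more attributes (GPS position, control inputs, forces, etc.).
@dataclass
class FlightDataPoint:
    time_s: float       # seconds since takeoff
    altitude_m: float   # altitude in meters (denoted h in the text)
    rotor_rpm: float
    motor_temp_c: float
    phase: str          # global phase label, e.g. "Takeoff", "Cruise"

def phase_durations(points):
    """Accumulate time spent in each global flight phase from successive points."""
    durations = {}
    for prev, curr in zip(points, points[1:]):
        durations[prev.phase] = durations.get(prev.phase, 0.0) + (curr.time_s - prev.time_s)
    return durations

log = [
    FlightDataPoint(0.0, 0.0, 900, 40.0, "Takeoff"),
    FlightDataPoint(30.0, 100.0, 1500, 55.0, "Cruise"),
    FlightDataPoint(300.0, 100.0, 1480, 70.0, "Cruise"),
    FlightDataPoint(330.0, 20.0, 1100, 72.0, "Landing"),
]
print(phase_durations(log))  # {'Takeoff': 30.0, 'Cruise': 300.0}
```

Note that, as discussed above, these durations may vary significantly between two flights of the same mission.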


Many codes have been suggested in the literature to represent flight data in a digital, machine-readable form. At this point, we present some aspects of a hypothetical flight data format, which has recently been developed to serve as a universal translator for common eVTOL flight data. This format textually describes how data points, flight phases, control inputs, and so on appear in a logged flight. As an example, below is a data structure representing flight parameters in JSON format, showing how a specific data point is encoded. To simplify the notation, we denote the altitude by h, following the convention that the unit is meters.


Example of a Data Structure (JSON):


JSON
{
  "timestamp": "2024-10-27T10:00:00Z",
  "altitude": 100.5,
  "latitude": 37.7749,
  "longitude": -122.4194,
  "rotor_rpm": 1500,
  "motor_temp": 85.2,
  "vibration_freq_1": 25.6,
  "vibration_freq_2": 51.2,
  "control_surfaces": {
    "aileron_left": 10.2,
    "aileron_right": -9.8
  }
}
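Such a record can be decoded with a few lines of Python using the standard-library json module; the sketch below assumes the field names of the example above.

```python
import json

# Decode the example data point; the field names follow the JSON example
# above and are otherwise hypothetical.
record = '''
{
  "timestamp": "2024-10-27T10:00:00Z",
  "altitude": 100.5,
  "rotor_rpm": 1500,
  "motor_temp": 85.2,
  "vibration_freq_1": 25.6,
  "vibration_freq_2": 51.2,
  "control_surfaces": {"aileron_left": 10.2, "aileron_right": -9.8}
}
'''
point = json.loads(record)
print(point["altitude"])  # 100.5
# The second vibration frequency is one octave above the first:
print(point["vibration_freq_2"] / point["vibration_freq_1"])  # 2.0
```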

Vibration Parameter Representation


XML
<vibration>
  <frequency>
    <fundamental>E</fundamental>
    <harmonic>-1</harmonic>
    <harmonic>-1</harmonic>
    <overtone>4</overtone>
  </frequency>
  <amplitude>2</amplitude>
  <type>half</type>
</vibration>

Fig. 1.2. Textual description of a vibration with an amplitude of 2 units, represented as a half cycle in the AeroXML format. (The flight path, aircraft model, and flight segment must be defined at the beginning of the AeroXML file.)


This AeroXML snippet represents a specific vibration characteristic within a flight profile. The tags <vibration> and </vibration> encapsulate the vibration data. The <frequency> element describes the frequency components:


<fundamental>E</fundamental>: Represents the fundamental frequency, analogous to a musical note's root pitch. In an eVTOL context, this could represent the primary rotor frequency or a dominant vibration frequency induced by aerodynamic forces.


<harmonic>-1</harmonic>: Indicates a flattening of the fundamental frequency, analogous to a flat in music. In vibration analysis, this could represent a slight shift in the fundamental frequency due to factors like temperature or load.


<overtone>4</overtone>: Specifies the overtone or harmonic number. This indicates the multiple of the fundamental frequency that is also present in the vibration. Overtones are crucial in understanding complex vibration patterns.


<amplitude>2</amplitude>: Represents the amplitude or intensity of the vibration. This value is measured in standard units (which would be defined elsewhere in the AeroXML file, such as meters per second squared for acceleration or meters for displacement).


<type>half</type>: This is analogous to a half note's duration in music notation. In the vibration context, it could represent a half cycle of the vibration or a specific duration of the vibration event relative to a defined time unit.


This XML structure allows for a detailed representation of complex vibrations. It is important to understand that the symbolic fundamental "E" is used here only to illustrate the structure; in real eVTOL data, the frequency would be represented numerically (e.g., in hertz).


Fig 1.2 Rotor Helical Tip Mach


AeroXML


In AeroXML, symbolic frequency labels follow a pitch-style convention: Middle C corresponds to the label C4 and the B above it to B4. A suffix is likewise attached for vibrations in lower and higher octave bands, so that the lowest labeled vibration on an aircraft is denoted by A0 and the highest by C8. Now, in the AeroXML encoding of the half-cycle vibration equivalent, the tags <vibration> and </vibration> mark the beginning and the end of an AeroXML vibration element. The frequency element, delimited by the tags <frequency> and </frequency>, consists of a fundamental element E (denoting the fundamental frequency of the vibration), the harmonic element -1 (shifting E down to the E-flat-equivalent frequency), and the overtone element 4 (fixing the overtone). Thus this vibration is a middle-E-flat equivalent. The element <amplitude>2</amplitude> encodes the amplitude of the vibration measured in specified units. Finally, the element <type>half</type> tells us that this vibration is notated as a half vibration, or half cycle.
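Reading such an element back into a program is straightforward with the standard-library XML parser. The sketch below parses the AeroXML snippet from Fig. 1.2; AeroXML itself is the hypothetical format described in this section.

```python
import xml.etree.ElementTree as ET

# Parse the AeroXML vibration element from Fig. 1.2 (AeroXML is the
# hypothetical format described in the text).
snippet = """
<vibration>
  <frequency>
    <fundamental>E</fundamental>
    <harmonic>-1</harmonic>
    <harmonic>-1</harmonic>
    <overtone>4</overtone>
  </frequency>
  <amplitude>2</amplitude>
  <type>half</type>
</vibration>
"""
vib = ET.fromstring(snippet)
freq = vib.find("frequency")
fundamental = freq.findtext("fundamental")                    # 'E'
harmonics = [int(h.text) for h in freq.findall("harmonic")]   # [-1, -1]
overtone = int(freq.findtext("overtone"))                     # 4
amplitude = float(vib.findtext("amplitude"))                  # 2.0
print(fundamental, harmonics, overtone, amplitude)
```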


There are various ways to generate digital vibration representations. First, one could manually input the flight specifications in a format such as AeroXML, which, however, is tedious and error-prone. Flight analysis software, also referred to as flight recorders or vibration analyzers, supports the user in the task of writing and editing digitized flight specifications, where the vibration objects can conveniently be input and modified by a computer's keyboard, a mouse, or an electronic flight control input device. Many of the current flight analysis programs, including popular tools used in the aerospace industry, support the interchange of files in the AeroXML format. Another common way of generating digital vibrations is to acquire data from sensors during flight tests, which converts the flight profile into a set of digitized signals. At this stage, the computer considers these signals as a mere collection of data points and does not grasp the semantics of the flight specifications. Therefore, in a second step, the digital signals have to be further processed and analyzed, for example to identify vibration frequencies and amplitudes, so that the result reflects the semantics of the flight data such as the vibrations, turbulence, or flight paths. The process of acquiring and transforming flight data into a computer-interpretable format is known as flight data analysis or vibration analysis.



1.1.2 Vibration, Waveform, and Signal Representation


From a physical point of view, a vibration or aerodynamic signal is generated by a vibrating object such as the rotor blades of an eVTOL, the airframe experiencing flutter, or turbulent airflow over control surfaces. These vibrations cause displacements and oscillations of the particles in the air, which in turn cause local regions of compression and rarefaction in the air. The alternating pressure travels as a wave from its source through the air to a sensor (e.g., a pressure transducer or accelerometer), which converts it into an electrical signal. Graphically, the change in air pressure at a certain location can be represented by a pressure-time plot, also referred to as the waveform of the vibration, which shows the deviation of the air pressure from the average air pressure (usually this deviation is measured in pascals). If the points of high and low air pressure repeat in some alternating and regular fashion, the resulting waveform is also called periodic, see Fig. 1.3. In this case, the period of the waveform is defined to be the time between two successive high-pressure points (or, equivalently, between two low-pressure points). The frequency, measured in hertz (Hz), is then the reciprocal of the period. For example, the sinusoidal waveform shown in Fig. 1.3 has a period of a quarter second and hence a frequency of 4 Hz.


The harmonic vibration or pure tone, which corresponds to a sinusoidal waveform as indicated in Fig. 1.3, can be considered the prototype of an aerodynamic or structural vibration. The property of a vibration that correlates with the perceived rate of oscillation is commonly referred to as the vibration frequency. For example, the middle A, also known as the standard reference frequency, has a frequency of 440 Hz. Since a slight change in frequency need not lead to a perceived change in vibration characteristics, one usually associates an entire range of frequencies with a single vibration mode. It is a well-known fact that the human sensation of vibration frequency is logarithmic in nature. For example, the perceived difference between vibrations of 220 Hz and 440 Hz is the same as the perceived difference between 440 Hz and 880 Hz. The interval between two vibrations whose frequencies differ by a factor of two is referred to as an octave or a frequency doubling. The close relation between vibrations separated by one octave, also referred to as octave equivalency or frequency-doubling equivalence, is closely related to our sensation of harmonic content and lays the foundation for vibration analysis based on frequency bands as used in structural dynamics. We will exploit this fact in the next article on vibration comparison by using vibration representations where frequencies are considered up to octave equivalence.
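Octave equivalence can be made computational by folding any frequency into one reference octave, so that 220 Hz, 440 Hz, and 880 Hz all map to the same representative. The following sketch does exactly that; the choice of the reference band [f_ref, 2*f_ref) is an assumption.

```python
import math

# Fold a frequency into one reference octave: octave-equivalent frequencies
# (factor-of-two multiples) all map to the same representative value.
def fold_to_octave(frequency, f_ref=440.0):
    """Map a frequency to its octave-equivalent value in [f_ref, 2*f_ref)."""
    octaves = math.floor(math.log2(frequency / f_ref))
    return frequency / (2 ** octaves)

print(fold_to_octave(220.0))  # 440.0
print(fold_to_octave(880.0))  # 440.0
print(fold_to_octave(660.0))  # 660.0
```

Representations built from such folded frequencies are what we mean by "frequencies considered up to octave equivalence".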



Frequency and Period:


The relationship between frequency (f) and period (T) is:


f = 1 / T


T = 1 / f


Python code:


Python
def frequency_from_period(period):
    """Calculates frequency from period."""
    return 1.0 / period

def period_from_frequency(frequency):
    """Calculates period from frequency."""
    return 1.0 / frequency

period = 0.25  # seconds
frequency = frequency_from_period(period)
print(f"Frequency: {frequency} Hz")  # Output: Frequency: 4.0 Hz

frequency = 4.0  # Hz
period = period_from_frequency(frequency)
print(f"Period: {period} s")  # Output: Period: 0.25 s


Representing a Sinusoidal Waveform:


A sinusoidal waveform can be represented mathematically as:


y(t) = A sin(2 pi f t + phi)


Where:


y(t) is the instantaneous displacement at time t


A is the amplitude


f is the frequency


t is the time


phi is the phase shift


Python code using numpy and matplotlib:

import numpy as np
import matplotlib.pyplot as plt

def generate_sinusoid(amplitude, frequency, time, phase=0):
    """Generates a sinusoidal waveform."""
    return amplitude * np.sin(2 * np.pi * frequency * time + phase)

amplitude = 1.0
frequency = 4.0
time = np.linspace(0, 1, 500)  # Time vector from 0 to 1 second
waveform = generate_sinusoid(amplitude, frequency, time)

plt.plot(time, waveform)
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.title("Sinusoidal Waveform (4 Hz)")
plt.grid(True)
plt.show()

When analyzing any type of excitation on an eVTOL structure, the resulting vibration signal is far from being a simple pure tone with a well-defined frequency. Intuitively, a vibration signal can be regarded as a superposition of pure tones, so-called harmonics or overtones, whose frequencies are integer multiples of a certain fundamental frequency. (By definition, the first harmonic corresponds to the fundamental frequency and the second harmonic to the first overtone.) Furthermore, a vibration signal usually contains non-periodic components such as noise-like components and transients, which typically appear during sudden changes in flight conditions (e.g., during takeoff or when encountering turbulence). Nevertheless, a vibration signal conveys the sensation of a dominant frequency, which corresponds to the fundamental frequency. As an example, consider Fig. 1.3, which shows in its upper part the waveform of a vibration signal recorded during an eVTOL flight. The lower part shows an enlargement of the section between seconds 7.3 and 7.8, which reveals the almost periodic nature of the vibration signal. The waveform within these 500 ms corresponds to a decaying vibration mode, which is observed during a specific flight segment. Indeed, one counts 37 periods within this section, corresponding to a fundamental frequency of 74 Hz, a characteristic frequency of this vibration mode.
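The period-counting argument above can be automated for a near-periodic segment by locating rising zero crossings and dividing their count by the time they span. The sketch below applies this to a synthetic decaying 74 Hz mode over a 500 ms window, as in the example; the 10 kHz sampling rate is an assumption.

```python
import numpy as np

# Estimate the fundamental frequency of a near-periodic segment from the
# spacing of its rising zero crossings (the automated version of counting
# periods in Fig. 1.3).
def fundamental_from_zero_crossings(signal, fs):
    """Estimate frequency from the spacing of rising zero crossings."""
    rising = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
    span = (rising[-1] - rising[0]) / fs  # time between first and last crossing
    return (len(rising) - 1) / span

fs = 10_000                               # sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)             # a 500 ms segment, as in Fig. 1.3
segment = np.sin(2 * np.pi * 74 * t) * np.exp(-t)  # decaying 74 Hz mode
print(fundamental_from_zero_crossings(segment, fs))  # approximately 74 Hz
```

For real flight data, noise and interfering harmonics make this naive estimator unreliable; the Fourier-based methods discussed later are more robust.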


Fig 1.3 Waveform and Decaying Excitation Mode shows a time-domain waveform of a vibration signal. The top graph shows a longer time segment, while the bottom graph zooms in on a smaller section to show more detail.

Formulas and Python Code Examples:


Octave (Frequency Doubling):


If a frequency is doubled, it's one octave higher. If it's halved, it's one octave lower.


Python code:

def octave_up(frequency):
    return frequency * 2

def octave_down(frequency):
    return frequency / 2

f = 220  # Hz
f_up = octave_up(f)
f_down = octave_down(f)
print(f"One octave up: {f_up} Hz")  # Output: One octave up: 440 Hz
print(f"One octave down: {f_down} Hz")  # Output: One octave down: 110.0 Hz

Harmonics/Overtones:


The frequency of the nth harmonic is n times the fundamental frequency (f0):


fn = n * f0


Python code:
def calculate_harmonic(fundamental_frequency, harmonic_number):
    return fundamental_frequency * harmonic_number

f0 = 100  # Fundamental frequency (Hz)
n = 3  # Third harmonic
f3 = calculate_harmonic(f0, n)
print(f"Frequency of harmonic {n}: {f3} Hz")  # Output: Frequency of harmonic 3: 300 Hz

A further important aspect of vibration analysis concerns the vibration intensity, which, in the physical context, refers to the energy of the vibration per unit time and area; sound intensity is the acoustic analog. At this point, we only give an intuitive description of these concepts and refer to the literature for precise definitions. Vibration intensity closely correlates with the amplitude of the waveform. For a periodic waveform, as shown in Fig. 1.3, the amplitude is defined as the magnitude of the maximal deviation of the air pressure (or acceleration, displacement, etc.) from its average value during one period. The vibration level is, similar to the relation between frequency and vibration mode, the subjective or measured correlate of amplitude or vibration intensity. Research in structural dynamics and vibration analysis has shown that vibration level depends on many factors, including frequency, damping, amplitude, and duration. For the sake of simplicity, we may assume that the amplitude of the waveform relates to our measurement or perception of vibration level. For arbitrary waveforms, one can consider the envelope as illustrated by Fig. 1.5b: the area enclosed by the envelope of the waveform within a certain time interval reflects the local energy of the vibration. For example, from Fig. 1.5b one can directly read off the decay of vibration intensity of one sustained vibration event (seconds 1.2 to 4) and another (seconds 5 to 7.9), which appear as extended periods of vibration in the two flight segments.


Figures 1.5 a-f show time-domain waveforms and envelopes of vibration signals.

For vibration intensity, the analogy to sound intensity is useful. Sound intensity (I) is related to sound pressure (p) and air density (ρ) and speed of sound (c):


I = p^2 / (ρ * c)


In vibration analysis, intensity is often related to the square of acceleration or velocity.
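The intensity relation above can be checked numerically. The sketch below evaluates I = p^2 / (ρ c) for the standard acoustic reference pressure, using sea-level values for air density and speed of sound; the use of RMS pressure is an assumption stated in the function's docstring.

```python
# Numeric check of the intensity relation I = p^2 / (rho * c), using
# standard sea-level values for air density and speed of sound.
def sound_intensity(p_rms, rho=1.225, c=343.0):
    """Sound intensity in W/m^2 from RMS sound pressure in Pa."""
    return p_rms**2 / (rho * c)

# Reference pressure of 20 micropascals (threshold of hearing):
print(sound_intensity(20e-6))  # ~9.5e-13 W/m^2
```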


Amplitude and Envelope:


The amplitude is the peak value of the waveform. The envelope is a curve that connects the peaks of the waveform.


Python code (using scipy for envelope calculation):
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import hilbert

# Sample waveform (replace with actual data)
time = np.linspace(0, 10, 1000)
waveform = np.sin(2 * np.pi * 2 * time) + 0.5 * np.sin(2 * np.pi * 5 * time)
waveform *= np.exp(-time / 3)  # Decaying signal

# Calculate the envelope using the Hilbert transform
envelope = np.abs(hilbert(waveform))

plt.figure(figsize=(10, 6))
plt.plot(time, waveform, label="Waveform")
plt.plot(time, envelope, 'r--', label="Envelope")
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.title("Waveform and Envelope")
plt.legend()
plt.grid(True)
plt.show()

# Example of calculating "local energy" (area under the envelope)
time_interval = (2, 5)
indices = np.where((time >= time_interval[0]) & (time <= time_interval[1]))
local_energy = np.trapz(envelope[indices], time[indices])
print(f"Local energy between t={time_interval[0]}s and t={time_interval[1]}s: {local_energy}")


The decrease of vibration intensity during extended periods of low activity becomes even more apparent in specific flight data, see Fig. 1.5e.


Besides frequency, vibration intensity, and duration, there is another fundamental aspect of vibration referred to as excitation mode shape or modal content. The mode shape allows one to distinguish the vibration pattern of different structural components of the eVTOL, even if the vibration occurs at the same frequency and with the same intensity. In particular, the mode shape depends on the proportion of the harmonics' intensities as well as on the noise-like vibration components. For details we refer to relevant literature on structural dynamics.
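One simple way to quantify modal content is to compare the proportions of the harmonics' intensities. The sketch below extracts the relative magnitudes of the first few harmonics from the FFT of two signals that share the same 100 Hz fundamental but differ in their second-harmonic content; the bin-picking approach and all signal parameters are illustrative assumptions.

```python
import numpy as np

# Two signals with the same fundamental and intensity can still differ in
# "modal content", i.e., the proportion of their harmonics' intensities.
def harmonic_profile(signal, fs, f0, n_harmonics=4):
    """Relative magnitudes of the first n harmonics of fundamental f0."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    mags = np.array([spectrum[np.argmin(np.abs(freqs - n * f0))]
                     for n in range(1, n_harmonics + 1)])
    return mags / mags.max()

fs = 8000
t = np.arange(0, 1, 1 / fs)
a = np.sin(2*np.pi*100*t) + 0.1*np.sin(2*np.pi*200*t)  # weak 2nd harmonic
b = np.sin(2*np.pi*100*t) + 0.8*np.sin(2*np.pi*200*t)  # strong 2nd harmonic
print(harmonic_profile(a, fs, 100))  # ~[1.0, 0.1, 0.0, 0.0]
print(harmonic_profile(b, fs, 100))  # ~[1.0, 0.8, 0.0, 0.0]
```

The two profiles distinguish the signals even though frequency and overall intensity are comparable, which is exactly the role the mode shape plays above.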


The waveform representation, as opposed to a symbolic representation of flight parameters (e.g., the flight log), encodes all information needed to reproduce the measured vibration or turbulence during a specific flight. This includes the temporal, dynamic, and vibration microdeviations that make the flight seem realistic. However, in waveform representations, parameters such as vibration onset times, dominant frequencies, and vibration durations are not given explicitly. As mentioned above, even a single vibration event becomes a complex signal when measured on an eVTOL, producing several harmonics as well as noise components. Therefore, vibration analysis and comparison are difficult to handle on the basis of waveform representations alone. This is also illustrated in Fig. 1.5c, which shows two rather distinct waveforms even though they belong to the same dominant frequency, once measured on the rotor and once measured on the airframe. The complexity of a waveform representation increases dramatically when considering complex flight scenarios, where the components of various vibrations interfere with each other and intermingle irrecoverably. This renders the task of determining vibration parameters from the waveform of a complex flight a difficult one. Except for very simple vibrations, the automatic conversion of a flight profile into a set of vibration parameters by a computer is still a largely open problem despite decades of research. In the next article we will discuss how one can, at least, extract vibration-related features from waveform representations, which then facilitates a comparison of various flight and vibration representations at the feature level.


Fig 1.6 Sampling and Quantization of Continuous Waveforms

The waveform as introduced so far is analog in the sense that it is continuous in both time and amplitude, which leads to an infinite number of values associated with the waveform. Since a computer can only store and process a finite number of values, one has to convert the waveform into some discrete or digital representation, a process commonly referred to as digitization. The digitization of waveforms consists of two steps called sampling and quantization, see Fig. 1.6 for an illustration. In the first step, the waveform is read, or sampled, at uniform time intervals. Then, in the second step, the value of the waveform at each sampled point is constrained, or quantized, to a discrete set of values. For example, modern data acquisition systems store digitized vibration data at a sampling rate of 100 kHz (100,000 samples per second). The resulting values are then quantized to a finite set of values, which are encoded by a digital coding scheme (e.g., 16-bit or 24-bit). Note that the digitization of the waveform is a lossy transformation in the sense that one loses information in this process. Therefore, it is generally not possible to reconstruct the original waveform from the digital representation. The introduced errors are known as aliasing and quantization errors, which may introduce spurious high-frequency vibrations or noise into the data. For digital representations as used in flight data recorders, however, the sampling rate as well as the quantization resolution is chosen in such a way that the degradation of the waveform is minimized and does not significantly impact the analysis. For further details, we refer to the literature, see, e.g., Multichannel Deconvolution of Vibrational Shock Signals: An Inverse Filtering Approach.


Formulas and Python Code Examples:


Sampling: The sampling process converts a continuous signal into a discrete sequence of values. The sampling rate (fs) is the number of samples taken per second. The Nyquist-Shannon sampling theorem states that the sampling rate must be at least twice the highest frequency component in the signal to avoid aliasing.


Python code:
import numpy as np
import matplotlib.pyplot as plt

fs = 1000  # Sampling rate (Hz)
t = np.linspace(0, 1, 1000, endpoint=False)  # Time vector
f = 50  # Frequency of signal (Hz)
x = np.sin(2 * np.pi * f * t)  # Signal

plt.figure(figsize=(10, 6))
plt.plot(t, x)
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.title('Continuous Signal')

# Sample the signal
n = np.arange(len(t))
xn = np.sin(2 * np.pi * f * n / fs)
plt.figure(figsize=(10, 6))
plt.stem(n, xn)
plt.xlabel('n')
plt.ylabel('x[n]')
plt.title('Discrete Signal')
plt.show()

Quantization:

Quantization converts the sampled values into a finite set of discrete levels. The number of bits used for quantization determines the number of levels (2^bits).


Python example (simplified):
import numpy as np
def quantize(signal, bits):
    levels = 2**bits
    min_val = np.min(signal)
    max_val = np.max(signal)
    range_val = max_val - min_val
    step = range_val / levels
    quantized_signal = np.round((signal - min_val) / step) * step + min_val
    return quantized_signal

signal = np.array([-0.8, -0.5, 0.2, 0.7, 1.2])
quantized_signal = quantize(signal, 3) # 3-bit quantization
print(f"Original signal: {signal}")

print(f"Quantized signal: {quantized_signal}")

In conclusion of this section, we have seen that the physical realization of vibrations and turbulence in terms of measured signals is a complex issue. We have briefly discussed the waveform representation and some fundamental properties of vibration signals such as frequency, vibration intensity, and modal content—properties that also depend on the sensor characteristics and data acquisition system. In the subsequent sections, we only require an intuitive understanding of vibration signals and their waveform representation. For the rest of this article, we simply use the term vibration representation to refer to the physical domain of vibrations and turbulence, be it the vibration signal of a flight test in its analog form or the digitized waveform of a recorded flight.



1.1.3 Flight Parameter and Event Representation


The flight parameter and event representation of flight data may be thought of as a hybrid of the flight log and the vibration representation: it can encode important information in the flight parameters as well as dynamic and transient characteristics of a specific flight. However, this representation is quite limited especially in representing the fine details of the vibration waveform. In this section, we give a short overview of this representation, which is used as a common symbolic digital flight data interchange format today.


Formulas and Python Code Examples (building on the previous examples):


Aliasing:


Aliasing occurs when the sampling rate is too low, and high-frequency components in the signal are misinterpreted as lower frequencies.


Python code (demonstrating aliasing):
import numpy as np
import matplotlib.pyplot as plt

fs1 = 100   # Low sampling frequency (below the Nyquist rate for f = 55 Hz)
fs2 = 1000  # High sampling frequency
t1 = np.arange(0, 1, 1/fs1)
t2 = np.arange(0, 1, 1/fs2)
f = 55  # Frequency of signal
x1 = np.sin(2*np.pi*f*t1)
x2 = np.sin(2*np.pi*f*t2)

plt.figure(figsize=(10, 6))
plt.plot(t1, x1, label="Aliased signal")
plt.plot(t2, x2, label="Original signal")
plt.legend()
plt.title("Aliasing")
plt.show()

Quantization Error:


Quantization error is the difference between the actual value of the sample and its quantized value.


Python example (calculating quantization error):

import numpy as np

def quantize(signal, bits):
    levels = 2**bits
    min_val = np.min(signal)
    max_val = np.max(signal)
    range_val = max_val - min_val
    step = range_val / levels
    quantized_signal = np.round((signal - min_val) / step) * step + min_val
    return quantized_signal

signal = np.array([-0.8, -0.5, 0.2, 0.7, 1.2])
quantized_signal = quantize(signal, 3)  # 3-bit quantization
quantization_error = signal - quantized_signal
print(f"Quantization error: {quantization_error}")

The Vibration Data Interchange Format (VDIF) was originally developed as an industry standard that allows digital vibration measurement instruments from different manufacturers to interoperate and exchange data, see [relevant literature on vibration data acquisition and exchange]. Actually, it was the advent of standardized digital data formats in the early days of digital computing that caused a rapid growth of the vibration measurement and analysis market. VDIF allows a flight test engineer to remotely and automatically control a vibration measurement instrument or a digital data acquisition system in real time. As an example, let us consider an accelerometer attached to a rotor blade, where the sensor detects a vibration event to start data acquisition and controls the intensity of the vibration by the amplitude of the measured signal. The end of the vibration event stops the data acquisition. Instead of physically monitoring and triggering the data acquisition, the engineer may also trigger the instrument to record the same vibration data by transmitting suitable VDIF messages, which encode the vibration start, the amplitude, and the vibration end. These VDIF messages may be automatically generated by some other electronic instrument or may be fed in via a computer. It is an important fact that VDIF does not represent vibration directly; it only represents measurement information encoding the instructions about how a vibration has been measured or how vibration data is to be acquired.


The original VDIF standard was later augmented to include the file format specification Standard Vibration Data File (SVDF), which describes how vibration data should be stored on a computer. This file format allows users to exchange vibration data regardless of the computer operating system and has provided a basis for an efficient data exchange of vibration data in the SVDF format. A VDIF file contains a list of VDIF messages together with timestamps, which are required to determine the timing of the messages. Further information (called metadata) is relevant to software that processes VDIF files.


For our purpose, the most important VDIF messages are the vibration-start and the vibration-end commands, which correspond to the start and the end of a vibration event, respectively. Intuitively, one may think of a vibration-start or vibration-end message as consisting, among other things, of a vibration frequency, a number for the vibration amplitude, a channel specification, and a timestamp. The vibration frequency is a value that encodes the dominant frequency of a vibration; vibration frequencies are typically represented in Hz. Similar to a multi-channel data acquisition system, where the different channels correspond to different sensor locations, the VDIF messages encode, in increasing channel order, the vibration data from different sensors. For example, a vibration event detected by a sensor on the rotor has a specific frequency, whereas a vibration event detected by a sensor on the airframe has a different frequency. The vibration amplitude is a numerical value that controls the intensity of the vibration: in the case of a vibration-start event it determines the amplitude, whereas in the case of a vibration-end event it indicates the decay during the release phase of the vibration. The exact interpretation of the vibration amplitude, however, depends on the respective instrument or sensor. The VDIF channel is likewise a numerical value; intuitively speaking, this number prompts the data acquisition system to use the specified sensor channel.
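The message fields described above can be sketched as a small data structure. The class layout below is hypothetical, since the text does not fix a binary encoding for VDIF messages.

```python
from dataclasses import dataclass

# A hypothetical sketch of the VDIF message fields described in the text
# (frequency, amplitude, channel, timestamp); no binary encoding is implied.
@dataclass
class VDIFMessage:
    kind: str           # "vibration-start" or "vibration-end"
    frequency_hz: float # dominant frequency of the vibration
    amplitude: float    # start: initial amplitude; end: release-phase decay
    channel: int        # sensor channel, e.g. rotor vs. airframe
    timestamp: int      # ticks to wait relative to the previous message

stream = [
    VDIFMessage("vibration-start", 74.0, 2.0, 1, 0),
    VDIFMessage("vibration-end",   74.0, 0.2, 1, 480),
]
# The duration of the event in ticks is the timestamp accumulated up to its end:
print(sum(m.timestamp for m in stream))  # 480
```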


Figure 1.7 shows a portion of a schematic of an eVTOL rotor with the blades labeled by their modal frequencies and the corresponding mode numbers. For example, the mode numbered 69 corresponds to a frequency of 440 Hz (the pitch A4 in the musical analogy used throughout this section).




Figure 1.8 shows two schematics for digitizing the first phases of the vibration shown in Figure 1.1.


(b) Acoustical instrumentation representation for scoring shown in Fig. 1.8.

Each channel supports polyphony, i.e., multiple simultaneous vibrations. The timestamp is an integer value that represents how many clock pulses or ticks to wait before the respective vibration-start command is executed. Figure 1.8a shows a (simplified and tabular) mode encoding of the first vibration events, corresponding to the first 12 modes indicated in Figure 1.1.



In the context of eVTOL flight within turbulent environments, the principles of modal analysis and frequency response are crucial for understanding and mitigating structural vibrations and flutter. Analogous to musical notes and chords, distinct vibration modes are excited within the eVTOL structure due to aerodynamic forces, turbulence, and rotor dynamics. Each mode is characterized by a specific frequency and spatial distribution of displacement, akin to the pitch and timbre of a musical note.


The "DaLI" encoding can be reinterpreted as a time-series representation of mode activations and their corresponding vibration levels. The "ticks" represent discrete time steps, "MODE ON" signifies the excitation of a particular mode, "MODE OFF" indicates its decay, "Ch." could represent different structural components or control surfaces, and "Mode Number" corresponds to the specific vibration mode being excited. "Vibration Level" corresponds to the amplitude of the vibration.


For instance, a sudden gust of wind could excite multiple vibration modes simultaneously, analogous to playing a chord. The time delay between successive mode activations, represented by the "ticks," reflects the temporal evolution of the structural response to the turbulent flow.


Analyzing these mode activations and their interactions provides insights into the flutter characteristics, structural integrity, and ride quality of the eVTOL. By understanding the frequency content and temporal dynamics of the vibrations, active flow control techniques and structural damping mechanisms can be implemented to mitigate undesirable vibrations and ensure safe and efficient flight within turbulent atmospheric conditions. This is similar to how a musician adjusts their playing to create a desired sound.


The concepts of harmonics and overtones in acoustics find a direct parallel in the higher-order vibration modes of the eVTOL structure. These higher modes, often at integer multiples of the fundamental frequencies, can be excited by nonlinear aerodynamic forces or structural interactions. Understanding and controlling these harmonics is crucial for preventing resonance and fatigue failure.


The application of active flow control techniques, such as micro-jets or synthetic jets, can be viewed as analogous to a musician manipulating the airflow through their instrument to produce different tones. By strategically modulating the flow around the eVTOL's wings and rotors, it's possible to alter the aerodynamic forces and mitigate the excitation of specific vibration modes, thereby reducing flutter and improving flight stability in turbulent conditions.


Figure 1.1 shows an example where vibrations from different parts of an eVTOL are assigned to different channels. The onset and offset events of vibration data are often visualized by representations similar to the one shown in Figure 1.8b. Here, each horizontal bar corresponds to a vibration event, where the vertical location of the bar indicates the frequency, and the start and end points in the horizontal direction reflect the time interval during which the vibration is active. This type of representation does not directly encode the vibration amplitude or channel information.


An important feature of this data format is that it can handle both physical onset times and vibration durations. Similar to a score format, this format expresses timing information in terms of physical entities rather than using absolute time units such as microseconds. To this end, a fundamental time unit is subdivided into basic time units referred to as clock pulses or ticks. The number of pulses per fundamental time unit (PPFU) is specified at the beginning of the data and refers to all subsequent data messages. A common value, as used in the example of Figure 1.8a, is 120 PPFU, which determines the resolution of the timestamps associated with vibration events. A timestamp indicates how many ticks to wait before a certain message is executed relative to the previous message. For example, the first vibration-on message with the mode number 67 is executed after 60 ticks. The second and third vibration-on messages are executed at the same time as the first one, encoded by the tick values zero. Then after 55 ticks, the vibration with mode 67 is switched off by the vibration-off message, and so on.
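As an illustrative sketch (the helper name is an assumption, not part of the format), the relative timestamps of the example above can be accumulated into absolute tick positions and, given the tick duration, into absolute times:

```python
def absolute_ticks(delta_ticks):
    """Converts relative timestamps (delta ticks) to absolute tick positions."""
    total, positions = 0, []
    for d in delta_ticks:
        total += d
        positions.append(total)
    return positions

# The example from the text: three simultaneous vibration-on messages after
# 60 ticks, followed by a vibration-off message 55 ticks later.
positions = absolute_ticks([60, 0, 0, 55])
print(positions)  # [60, 60, 60, 115]

# With 120 PPFU and 600000 µs per fundamental time unit, one tick lasts
# 5000 µs, so the vibration-off message executes at 115 * 5000 µs = 0.575 s.
tick_us = 600000 / 120
print(positions[-1] * tick_us / 1e6)  # 0.575
```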


Similar to the score representation, this data format also allows for encoding and storing absolute timing information, however, at a much finer resolution level and in a more flexible way. To this end, one can include additional tempo messages that specify the number of microseconds per fundamental time unit. From the tempo message, one can compute the absolute duration of a tick. For example, having 600000 µs per fundamental time unit and 120 PPFU, each tick corresponds to 5000 µs. Furthermore, one can derive from the tempo message the number of fundamental time units occurring in a minute—a measurement analogous to beats per minute (BPM). For example, 600000 µs per fundamental time unit corresponds to 100 fundamental time units per minute. While the number of pulses per fundamental time unit is fixed, the absolute tempo information may be changed by inserting a tempo message between any two vibration-on or other messages. This makes it possible to account for not only global tempo information but also for local tempo changes such as accelerations or decelerations of vibration events.


Python

def calculate_tick_duration(microseconds_per_fundamental_unit, pulses_per_fundamental_unit):
    """Calculates the duration of a tick in microseconds.
    Args:
        microseconds_per_fundamental_unit: The number of microseconds per fundamental time unit.
        pulses_per_fundamental_unit: The number of pulses per fundamental time unit.
    Returns:
        The duration of a tick in microseconds.
    """
    return microseconds_per_fundamental_unit / pulses_per_fundamental_unit

def calculate_fundamental_units_per_minute(microseconds_per_fundamental_unit):
    """Calculates the number of fundamental units per minute.
    Args:
        microseconds_per_fundamental_unit: The number of microseconds per fundamental time unit.
    Returns:
        The number of fundamental units per minute.
    """
    return (60 * 10**6) / microseconds_per_fundamental_unit

# Example usage
microseconds_per_fundamental_unit = 600000
pulses_per_fundamental_unit = 120
tick_duration = calculate_tick_duration(microseconds_per_fundamental_unit, pulses_per_fundamental_unit)
print(f"Tick duration: {tick_duration} µs")  # 5000.0 µs
fundamental_units_per_minute = calculate_fundamental_units_per_minute(microseconds_per_fundamental_unit)
print(f"Fundamental units per minute: {fundamental_units_per_minute}")  # 100.0

Regarding the limitations of the data format, as noted in [80], it is not capable of distinguishing between two modes with similar frequencies. Also, information on the representation of structural features cannot be encoded. Furthermore, this format does not define a vibration event explicitly; rather, vibrations are bounded by vibration-on and vibration-off events (or vibration-on events with zero amplitude). Periods of quiescence are not represented at all and must be inferred from the absence of vibration events. Here, the objective of representations like those based on physical models is to explicitly describe all aspects to reproduce physical behavior without ambiguities and missing data.
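Since vibrations are only bounded by on and off events, a processing step is needed to derive explicit vibration intervals. The sketch below assumes a stream of (tick, mode, amplitude) tuples and the zero-amplitude-off convention mentioned above; the function name is hypothetical:

```python
def extract_intervals(messages):
    """Derives explicit (mode, start_tick, end_tick) intervals from a stream
    of (tick, mode, amplitude) messages, treating amplitude 0 as an off event,
    one of the conventions noted in the text."""
    active, intervals = {}, []
    for tick, mode, amplitude in messages:
        if amplitude > 0:
            active.setdefault(mode, tick)  # vibration-on: remember start tick
        elif mode in active:
            intervals.append((mode, active.pop(mode), tick))  # vibration-off
    return intervals

# Two overlapping vibrations: mode 67 active over [60, 115], mode 64 over [60, 130]
stream = [(60, 67, 100), (60, 64, 90), (115, 67, 0), (130, 64, 0)]
print(extract_intervals(stream))  # [(67, 60, 115), (64, 60, 130)]
```

Gaps between the derived intervals are exactly the periods of quiescence that the format itself never represents.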



1.2 Fourier Transform


The Fourier transform is the most important mathematical tool in vibration and fluid signal processing. It maps a time-dependent function f into a frequency-dependent function f̂, which reveals the spectrum of frequency components that compose the original function. Loosely speaking, a function and its Fourier transform are two sides of the same information:


The function f displays the time information and obscures the information about frequencies. Intuitively, a signal corresponding to a recording of an eVTOL in flight shows when vibrations occur (changes in acceleration or pressure) but not which frequencies are dominant.


The Fourier transform f̂ displays information about frequencies and obscures the time information. Intuitively, the Fourier transform of a recording of an eVTOL in flight indicates which frequencies are present, but it is extremely difficult to figure out when they occur.


Fig. 1.9

This fact is also illustrated in Fig. 1.9. In this section, we give a short overview of several variants of the Fourier transform and their interrelations. In Sect. 1.2.1, we start with a mathematical description of continuous-time as well as discrete-time signals and introduce the corresponding spaces of finite-energy signals. Such signals possess a Fourier representation, which can be thought of as a weighted superposition of sinusoids of various frequencies (Sect. 1.2.2). Computing the Fourier transform involves the evaluation of integrals or infinite sums. In practice, one has to approximate the Fourier transform by finite sums. This can be done efficiently by means of the well-known fast Fourier transform (FFT), see Sect. 1.2.3. References to the literature will be given in Sect. 1.2.4.



1.2.1 Signals and Signal Spaces


A signal can be defined as a function that conveys information about the state or behavior of a physical system. For example, a signal may describe the acceleration of a point on an eVTOL rotor blade as it rotates, or the pressure fluctuations at a sensor mounted on the wing surface as it interacts with turbulent airflow. These signals capture the dynamic behavior of the system, reflecting the interplay of aerodynamic forces, structural vibrations, and control inputs. In the context of eVTOLs, these signals are crucial for understanding and mitigating phenomena like flutter, buffeting, and other aeroelastic instabilities.


Figure 1.9 shows a signal f (left) and the absolute value |f̂| of its Fourier transform (right). The two peaks at ω = 1 and ω = 5 of |f̂| correspond to the superposition of two sinusoids constituting the first part of f, whereas the peak at ω = 3 corresponds to the sinusoid constituting the second part of f. The signal f is zero outside the interval [0, 10]. The ripples in the spectrum basically come from the phenomenon known as spectral leakage, where many different frequency components are needed to produce the compact support of f.
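A spectrum of this kind can be reproduced numerically. The sketch below assumes unit amplitudes and a 100 Hz sampling rate, neither of which is specified for the figure:

```python
import numpy as np

# Illustrative reconstruction of the signal described above: two sinusoids
# (1 Hz and 5 Hz) on the first half of [0, 10], a single 3 Hz sinusoid on
# the second half, zero elsewhere.
fs = 100  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
f = np.where(t < 5,
             np.sin(2 * np.pi * 1 * t) + np.sin(2 * np.pi * 5 * t),
             np.sin(2 * np.pi * 3 * t))

# Magnitude spectrum via the FFT; the three dominant peaks lie near 1, 3, 5 Hz,
# surrounded by leakage ripples caused by the compact support of f.
spectrum = np.abs(np.fft.rfft(f)) / len(f)
freqs = np.fft.rfftfreq(len(f), d=1 / fs)
peaks = sorted(freqs[np.argsort(spectrum)[-3:]])
print(peaks)
```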


Figure 1.10 shows the sinusoid f(t) = A sin(2π(ωt - φ)) displayed for t ∈ [0, 2] and for various values of A, ω, and φ.


A signal can represent the time-varying pressure at a sensor location, the motion of a particle, the distribution of a fluid property, or a sequence of images. In the following, we consider signals related to vibrations and fluid dynamics. Graphically, such a signal may be represented by its waveform, which depicts the amplitude of a physical quantity (e.g., pressure, velocity, acceleration) over time. In the following, we discuss two different kinds of signals: continuous-time and discrete-time signals.


Mathematically, a continuous-time (CT) or analog signal is a function f: ℝ → ℝ, where the domain ℝ represents the time axis and the range ℝ represents the amplitude of the physical quantity. As an example, consider the CT signal f defined by


f(t) := A sin(2π(ωt - φ)), t ∈ ℝ,


for fixed real parameters A, ω, and φ. This is commonly referred to as a sinusoid of amplitude A, frequency ω, and phase φ. In the context of vibrations and fluid dynamics, A represents the amplitude of the vibration or fluctuation, ω represents the frequency of the oscillation, and φ represents the initial phase of the oscillation.


A continuous-time (CT) signal f is called periodic with period λ ∈ ℝ>0 if f(t) = f(t + λ) holds for all t ∈ ℝ. For example, the above sinusoid is a periodic signal of period λ = 1/ω.




Here's the formula in Python:


import numpy as np
import matplotlib.pyplot as plt

def sinusoid(t, A, omega, phi):
    """Generates a sinusoidal signal.
    Args:
        t: A numpy array representing the time vector.
        A: The amplitude of the sinusoid.
        omega: The frequency of the sinusoid (cycles per unit time).
        phi: The phase shift of the sinusoid.
    Returns:
        A numpy array representing the sinusoidal signal.
    """
    return A * np.sin(2 * np.pi * (omega * t - phi))

# Example usage: the parameter settings of Figure 1.10
t = np.linspace(0, 2, 500)  # time vector from 0 to 2 with 500 points
for A, omega, phi in [(1.4, 1, 0.25), (1, 3, 0), (0.8, 3, 0.5)]:
    f = sinusoid(t, A, omega, phi)
    plt.figure()
    plt.plot(t, f)
    plt.xlabel("Time (t)")
    plt.ylabel("f(t)")
    plt.title(f"Sinusoid (A={A}, omega={omega}, phi={phi})")
    plt.grid(True)
    plt.show()

In contrast to a CT signal, a discrete-time (DT) signal is defined only on a discrete subset of the time axis. By means of a suitable encoding, one often assumes that this discrete set is a suitable subset I of the set ℤ of integers. Then a DT signal is defined to be a function x: I → ℝ, where the domain I corresponds to points in time. Since one can extend any DT signal from the domain I to the domain ℤ simply by setting all values to zero for points in ℤ \ I, we may assume I = ℤ. In the following discussion, we often use the symbols f and g to denote CT signals and the symbols x and y to denote DT signals. For the time parameter, we often use the symbol t in the CT case and n in the DT case. In view of the Fourier transform, one typically extends the notion of a signal to complex-valued CT and DT functions, replacing the range ℝ by ℂ. Note that any real-valued function can be regarded as a complex-valued function simply by setting the imaginary part to zero.


A typical procedure to transform a CT signal into a DT signal is known as equidistant sampling. Let f: ℝ → ℂ be a CT signal, then the T-sampling of f with respect to a real number T > 0 is defined to be the DT signal x: ℤ → ℂ with x(n) := f(Tn). The number T is referred to as the sampling period, and the inverse 1/T as the sampling rate, which is the number of samples per second measured in hertz (Hz). For example, common sampling rates for vibration and fluid data are application-dependent. Note that the sampling rate is crucial for the quality of the signal. The side effects that are introduced by sampling are known as aliasing and are discussed later. Generally speaking, the CT domain gives the "correct" interpretation of physical phenomena, whereas the DT domain is used to do the actual computations.
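The T-sampling construction, together with the aliasing side effect just mentioned, can be illustrated as follows (the 8 Hz sampling rate and the test frequencies are arbitrary choices):

```python
import numpy as np

# Equidistant T-sampling of a CT signal f: x(n) = f(T * n). With sampling
# rate 1/T = 8 Hz, a 1 Hz and a 9 Hz sinusoid yield identical samples:
# sin(2*pi*9*n/8) = sin(2*pi*n + 2*pi*n/8) = sin(2*pi*n/8).
def t_sampling(f, T, num_samples):
    n = np.arange(num_samples)
    return f(T * n)

T = 1 / 8  # sampling period for an 8 Hz sampling rate
x1 = t_sampling(lambda t: np.sin(2 * np.pi * 1 * t), T, 16)
x9 = t_sampling(lambda t: np.sin(2 * np.pi * 9 * t), T, 16)
print(np.allclose(x1, x9))  # True: 9 Hz aliases onto 1 Hz at 8 Hz sampling
```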


Phenomena such as superposition or amplification of signals can be modeled by means of suitable operations on the space of signals. Let D denote either the domain ℝ of CT signals or the domain ℤ of DT signals. Then the superposition of two signals f: D → ℂ and g: D → ℂ is the sum f + g defined pointwise by (f + g)(t) := f(t) + g(t) for t ∈ D. Similarly, the amplification of a signal f by a real factor A is the scalar multiple Af, which is also defined pointwise. Figure 1.11 gives an example of a superposition of amplified signals. We see in Sect. 1.2.2 how the Fourier transform can be regarded as a kind of reverse operation, which decomposes a given signal into elementary signals with an explicit physical interpretation. The space ℂ<sup>D</sup> := {f | f: D → ℂ} with the above addition and scalar multiplication defines a complex vector space of infinite dimension.
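The pointwise operations can be sketched on finite DT signals represented as arrays (an illustrative choice; the signal values are arbitrary):

```python
import numpy as np

# Superposition and amplification as pointwise operations on DT signals.
f = np.array([0.0, 1.0, 0.0, -1.0])
g = np.array([1.0, 1.0, 1.0, 1.0])
A = 2.0
superposition = f + g  # (f + g)(n) = f(n) + g(n)
amplified = A * f      # (A f)(n) = A * f(n)
print(superposition.tolist())  # [1.0, 2.0, 1.0, 0.0]
print(amplified.tolist())      # [0.0, 2.0, 0.0, -2.0]
```

Together these two operations give the set of signals the structure of a vector space, as stated in the text.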


In view of certain signal manipulations such as the Fourier transform or filtering operations, the spaces ℂ<sup>ℝ</sup> or ℂ<sup>ℤ</sup> are far too large. One therefore defines suitable subspaces that guarantee certain signal properties and imply the feasibility of the desired signal manipulations. Very important subspaces of ℂ<sup>ℤ</sup> or ℂ<sup>ℝ</sup> are the so-called Lebesgue spaces ℓ<sup>p</sup>(ℤ) and L<sup>p</sup>(ℝ), respectively.


Figure 1.11 shows the superposition of the signals f, the amplified signal g, and the amplified noise signal h.


where p ≥ 1 is either a real parameter or p = ∞. Furthermore, identifying the 1-periodic functions of ℂ<sup>ℝ</sup> with the space ℂ<sup>[0,1]</sup>, one obtains a Lebesgue space L<sup>p</sup>([0,1]). For a definition and a proof of the main properties of these spaces, we refer to []. We also refer to the literature for notions such as norm, inner product, Banach space, and Hilbert space; see also Sect. 1.2.4 for further comments and references. Table 1.1 summarizes the definition of the Lebesgue spaces ℓ<sup>2</sup>(ℤ), L<sup>2</sup>(ℝ), and L<sup>2</sup>([0,1]). These spaces are of particular interest due to the existence of an inner product and the induced Hilbert space structure. By the inner product, one can generalize geometric concepts such as angles and orthogonality as known from the case of finite-dimensional Euclidean spaces to the case of infinite-dimensional function spaces. For a given signal f, the quantity ||f||<sup>2</sup> is also referred to as the energy of f. Note that the Lebesgue spaces for p = 2 consist exactly of the signals of finite energy.


1.2.2 Fourier Representations


The basic idea of the Fourier representation is to represent a signal as a weighted superposition of independent elementary frequency functions that possess an explicit physical interpretation. Each of the weights expresses to what extent the corresponding elementary function contributes to the original signal, thus revealing a certain aspect of the signal.


At this point, we do not deal with the mathematical intricacies concerning convergence or integrability—we make some remarks on the technical details in Sect. 1.2.4. We start our discussion with the case of 1-periodic CT signals and the space L<sup>2</sup>([0,1]). Note that any constant as well as any 1/k-periodic function for an integer k ∈ ℕ is 1-periodic too. The sinusoidal t ↦ √2 sin(2πkt) may be regarded as the archetype of a 1/k-periodic function, which represents a pure vibration of k Hz having unit energy, i.e., having an L<sup>2</sup>([0,1]) norm of one.


Table 1.1. Overview of the Lebesgue spaces (L^{2}(\mathbb{R})), (L^{2}([0,1])), and (l^{2}(\mathbb{Z})) and their respective Fourier representations and Fourier transforms.


Signal space: (L^{2}(\mathbb{R})) | (L^{2}([0,1])) | (l^{2}(\mathbb{Z}))


Definition: ({f:\mathbb{R}\rightarrow\mathbb{C}\mid\|f\|_{2}<\infty}) | ({f:[0,1]\rightarrow\mathbb{C}\mid\|f\|_{2}<\infty}) | ({x:\mathbb{Z}\rightarrow\mathbb{C}\mid\|x\|_{2}<\infty})


Inner product: (\langle f|g\rangle=\int_{t\in\mathbb{R}}f(t)\overline{g(t)}dt) | (\langle f|g\rangle=\int_{t\in[0,1]}f(t)\overline{g(t)}dt) | (\langle x|y\rangle=\sum_{n\in\mathbb{Z}}x(n)\overline{y(n)})


Norm: (\|f\|_{2}=\langle f|f\rangle^{1/2}) in each of the three spaces


Elementary frequency function: (\mathbb{R}\rightarrow\mathbb{C}), (t\mapsto e^{2\pi i\omega t}) | ([0,1]\rightarrow\mathbb{C}), (t\mapsto e^{2\pi ikt}) | (\mathbb{Z}\rightarrow\mathbb{C}), (n\mapsto e^{2\pi i\omega n})


Frequency parameter: (\omega\in\mathbb{R}) | (k\in\mathbb{Z}) | (\omega\in[0,1])


Fourier representation: (f(t)=\int_{\omega\in\mathbb{R}}c_{\omega}e^{2\pi i\omega t}d\omega) | (f(t)=\sum_{k\in\mathbb{Z}}c_{k}e^{2\pi ikt}) | (x(n)=\int_{\omega\in[0,1]}c_{\omega}e^{2\pi i\omega n}d\omega)


Fourier transform: (\hat{f}(\omega)=\int_{t\in\mathbb{R}}f(t)e^{-2\pi i\omega t}dt), (c_{\omega}=\hat{f}(\omega)) | (\hat{f}(k)=\int_{t\in[0,1]}f(t)e^{-2\pi ikt}dt), (c_{k}=\hat{f}(k)) | (\hat{x}(\omega)=\sum_{n\in\mathbb{Z}}x(n)e^{-2\pi i\omega n}), (c_{\omega}=\hat{x}(\omega))


Strictly speaking, one needs additional technical assumptions and modifications in the above definitions as discussed in Sect. 1.2.4.


More generally, all phase-shifted versions (t\mapsto\sqrt{2}cos(2\pi(kt-\varphi))) have this interpretation. The idea of the Fourier transform is to represent any 1-periodic function in terms of such sinusoidals. In fact, it can be shown that any real-valued signal (f\in L^{2}([0,1])) can be written as


(f(t)=d_{0}+\sum_{k\in\mathbb{N}}d_{k}\sqrt{2}\cos(2\pi(kt-\varphi_{k}))) (1.1)

for suitable amplitudes (d_{k}\in\mathbb{R}_{\ge0}) and phases (\varphi_{k}\in[0,1)). This superposition exhibits the frequency content of f as follows: the coefficient (d_{k}) reflects the contribution of the corresponding sinusoidal of k Hz, whereas the coefficient (\varphi_{k}) shows how the sinusoidal has to be shifted to best "explain" or "match" the original signal.

Python
import numpy as np
def f(t, d, phi):
    """Evaluates the superposition (1.1) for given amplitudes d and phases phi."""
    result = d[0]
    for k in range(1, len(d)):
        result += d[k] * np.sqrt(2) * np.cos(2 * np.pi * (k * t - phi[k]))
    return result


For each frequency parameter (k\in\mathbb{N}), instead of using one sinusoidal of arbitrary phase, one may use two sinusoidals of different, but fixed phase. In particular, using the two sinusoidals (t\mapsto\sqrt{2}cos(2\pi kt)) and (t\mapsto\sqrt{2}sin(2\pi kt)), one obtains the well-known Fourier series




(f(t)=a_{0}+\sum_{k\in\mathbb{N}}a_{k}\sqrt{2}\cos(2\pi kt)+\sum_{k\in\mathbb{N}}b_{k}\sqrt{2}\sin(2\pi kt)) (1.2)

Python
import numpy as np
def f(t, a, b):
    """Evaluates the Fourier series (1.2) for given coefficients a and b."""
    result = a[0]
    for k in range(1, len(a)):
        result += a[k] * np.sqrt(2) * np.cos(2 * np.pi * k * t) + b[k] * np.sqrt(2) * np.sin(2 * np.pi * k * t)
    return result


for suitable coefficients (a_0), (a_k), (b_k \in \mathbb{R}), (k \in \mathbb{N}). These coefficients are also referred to as the Fourier coefficients of f. From the equality (\cos(2\pi(kt - \varphi_k)) = \cos(2\pi kt)\cos(2\pi\varphi_k) + \sin(2\pi kt)\sin(2\pi\varphi_k)) it follows that the superposition in (1.1) can be recovered from the coefficients in (1.2) by setting (d_0 = a_0), (d_k = (a_k^2 + b_k^2)^{1/2}), and (\varphi_k = \frac{1}{2\pi}\arccos(a_k/d_k)), (k \in \mathbb{N}). Furthermore, the set ({1, \sqrt{2}\cos(2\pi k\cdot), \sqrt{2}\sin(2\pi k\cdot) \mid k \in \mathbb{N}}) can be shown to form an orthonormal (ON) basis of the space (L^2([0,1])). Thus, the Fourier coefficients are given by the inner products of f with the basis functions of the ON basis:


(a_{0} = \langle f|1\rangle = \int_{0}^{1} f(t)\,dt) (1.3)


(a_{k} = \langle f|\sqrt{2}\cos(2\pi k\cdot)\rangle = \sqrt{2}\int_{0}^{1} f(t)\cos(2\pi kt)\,dt) (1.4)


(b_{k} = \langle f|\sqrt{2}\sin(2\pi k\cdot)\rangle = \sqrt{2}\int_{0}^{1} f(t)\sin(2\pi kt)\,dt) (1.5)


Python

import numpy as np
import scipy.integrate as integrate
def calculate_fourier_coefficients(f, k_max):
    """Calculates Fourier coefficients a0, ak, and bk for a given function f.
    Args:
        f: The function to analyze.
        k_max: The maximum value of k to calculate coefficients for.
    Returns:
        A tuple containing lists of a0, ak, and bk coefficients.
    """
    a0 = integrate.quad(f, 0, 1)[0]  #integrate f from 0 to 1

    ak = []
    bk = []
    for k in range(1, k_max + 1):
        ak.append(np.sqrt(2) * integrate.quad(lambda t: f(t) * np.cos(2 * np.pi * k * t), 0, 1)[0])
        bk.append(np.sqrt(2) * integrate.quad(lambda t: f(t) * np.sin(2 * np.pi * k * t), 0, 1)[0])
    return a0, ak, bk

# Example usage (define a test function f):
def test_function(t):
    return np.sin(2*np.pi*t) + 0.5*np.cos(4*np.pi*t)
a0, ak, bk = calculate_fourier_coefficients(test_function, 5)
print("a0:", a0)
print("ak:", ak)
print("bk:", bk)
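The conversion between the two coefficient sets, (d_k = (a_k^2 + b_k^2)^{1/2}) and (\varphi_k = \frac{1}{2\pi}\arccos(a_k/d_k)), can be sketched as follows (the helper name is an arbitrary choice):

```python
import numpy as np

# Conversion from the Fourier coefficients (a_k, b_k) of (1.2) to the
# amplitude/phase form (d_k, phi_k) of (1.1).
def to_amplitude_phase(ak, bk):
    dk = np.hypot(ak, bk)                    # d_k = (a_k^2 + b_k^2)^(1/2)
    phik = np.arccos(ak / dk) / (2 * np.pi)  # phi_k = arccos(a_k/d_k) / (2 pi)
    return dk, phik

dk, phik = to_amplitude_phase(1.0, 1.0)
print(dk, phik)  # sqrt(2) and 1/8: sqrt(2)*cos(2*pi*(t - 1/8)) = cos(2*pi*t) + sin(2*pi*t)
```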

The two real-valued sinusoids (t \mapsto \cos(2\pi kt)) and (t \mapsto \sin(2\pi kt)) may be combined to form a single complex-valued exponential function (e_k: [0,1] \rightarrow \mathbb{C}) defined by


(e_{k}(t) := e^{2\pi ikt} = \cos(2\pi kt) + i\sin(2\pi kt).) (1.6)


As often in mathematics, the transfer of a problem from the real into the complex world can have several advantages. Firstly, the concept of Fourier series can be naturally generalized from real-valued to complex-valued signals. Secondly, one obtains a concise and elegant formula of the Fourier representation. Thirdly, as we will see in a moment, the amplitude (d_k) and phase (\varphi_k) can be naturally expressed by a single complex Fourier coefficient. Again, one can show that the set ({e_k | k \in \mathbb{Z}}) forms an ON basis of (L^2([0,1])). The resulting expansion of a signal (f \in L^2([0,1])) with respect to this basis leads to the equality (1.7), which is also referred to as the (complex) Fourier series. The corresponding (complex) Fourier coefficients (c_k \in \mathbb{C}) are given by (1.8):


(f(t) = \sum_{k\in\mathbb{Z}} c_{k} e_{k}(t) = \sum_{k\in\mathbb{Z}} c_{k} e^{2\pi ikt},) (1.7)


(c_{k} = \langle f|e_{k}\rangle = \int_{0}^{1} f(t)\overline{e_{k}(t)}\,dt = \int_{0}^{1} f(t) e^{-2\pi ikt}\,dt.) (1.8)


Python
import cmath
import numpy as np
import scipy.integrate as integrate
def calculate_complex_fourier_coefficients(f, k_max):
    """Calculates complex Fourier coefficients ck for a given function f.
    Args:
        f: The function to analyze.
        k_max: The maximum absolute value of k to calculate coefficients for.
    Returns:
        A dictionary where keys are k values and values are the corresponding ck coefficients.
    """
    ck = {}
    for k in range(-k_max, k_max + 1):
        # quad integrates real-valued functions, so the real and imaginary
        # parts of the integrand are handled separately
        real = integrate.quad(lambda t: (f(t) * cmath.exp(-2j * cmath.pi * k * t)).real, 0, 1)[0]
        imag = integrate.quad(lambda t: (f(t) * cmath.exp(-2j * cmath.pi * k * t)).imag, 0, 1)[0]
        ck[k] = real + 1j * imag
    return ck

# Example usage (define a test function f):
def test_function(t):
    return np.sin(2*np.pi*t) + 0.5*np.cos(4*np.pi*t)
ck = calculate_complex_fourier_coefficients(test_function, 5)
print("ck:", ck)

Now, for a real-valued signal f, one obtains (c_{-k} = \overline{c_k}) for (k \in \mathbb{Z}) (this, however, does not hold for a complex-valued signal). In other words, for real-valued signals the coefficients with negative indices are redundant. Furthermore, one easily shows the identities (a_0 = c_0), (a_k = \sqrt{2}Re(c_k)), and (b_k = -\sqrt{2}Im(c_k)), (k \in \mathbb{N}). Recall that the complex number (c_k) can be represented in polar coordinates as (c_k = |c_k|e^{2\pi i\gamma_k}), where (|c_k| \in \mathbb{R}_{\ge 0}) is the absolute value (distance from the origin) of (c_k) and (\gamma_k \in [0,1)) is the phase (angle) of (c_k). From this, one obtains


(d_k = (a_k^2 + b_k^2)^{1/2} = (2Re(c_k)^2 + 2Im(c_k)^2)^{1/2} = \sqrt{2}|c_k|) (1.9)


(\varphi_k = \frac{1}{2\pi}arccos(\frac{a_k}{d_k}) = \frac{1}{2\pi}arccos(\frac{Re(c_k)}{|c_k|}) = \frac{arccos(cos(2\pi\gamma_k))}{2\pi} = \gamma_k.) (1.10)


In other words, the amplitude (d_k) (up to a constant normalization factor) and the phase (\varphi_k) in the expansion (1.1) coincide with the absolute value and phase of the complex Fourier coefficient (c_k) respectively. The map (\hat{f}:\mathbb{Z}\rightarrow\mathbb{C}) defined by (\hat{f}(k):=c_k) is also referred to as the Fourier transform of f. Furthermore, it can be shown that the Fourier transform is an energy preserving map or isometry between the Hilbert spaces (L^2([0,1])) and (l^2(\mathbb{Z})). In other words, if (f \in L^2([0,1])), then (\hat{f} \in l^2(\mathbb{Z})) and (||f||_{L^2([0,1])} = ||\hat{f}||_{l^2(\mathbb{Z})}).
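Both the coefficient identities and the energy preservation can be checked numerically. The sketch below approximates the integrals by sums over an equispaced grid (the signal and grid size are arbitrary choices):

```python
import numpy as np

# Check of a_k = sqrt(2) Re(c_k), b_k = -sqrt(2) Im(c_k), c_{-k} = conj(c_k),
# and the isometry ||f||^2 = sum_k |c_k|^2, for a signal with known a_1, b_1.
a1, b1 = 0.6, 0.8
N = 1024
t = np.arange(N) / N
f = a1 * np.sqrt(2) * np.cos(2 * np.pi * t) + b1 * np.sqrt(2) * np.sin(2 * np.pi * t)

c = np.fft.fft(f) / N  # c_k for k = 0, 1, ...; negative k wrap to the end
c1 = c[1]
print(np.sqrt(2) * c1.real)            # recovers a1 = 0.6
print(-np.sqrt(2) * c1.imag)           # recovers b1 = 0.8
print(np.isclose(c[-1], np.conj(c1)))  # c_{-1} = conj(c_1) for real-valued f

energy_time = np.mean(np.abs(f) ** 2)  # ||f||^2 in L^2([0,1])
energy_freq = np.sum(np.abs(c) ** 2)   # ||f_hat||^2 in l^2(Z)
print(np.isclose(energy_time, energy_freq))  # energy preservation
```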


The general idea of Fourier representation carries over from the case of periodic to the case of non-periodic signals in (L^2(\mathbb{R})). In the non-periodic case, however, the integer frequency parameters (k \in \mathbb{Z}) do not suffice to "describe" a signal. Instead, one defines an exponential function (e_\omega: \mathbb{R} \rightarrow \mathbb{C}), (e_\omega(t):=e^{2\pi i\omega t}), for any parameter (\omega \in \mathbb{R}). Then, replacing summation by integration one obtains the following non-periodic analog of the Fourier series:


(f(t) = \int_{\omega\in\mathbb{R}}c_\omega e_\omega(t)d\omega = \int_{\omega\in\mathbb{R}}c_\omega e^{2\pi i\omega t}d\omega) (1.11)


for (t \in \mathbb{R}), where (c_\omega) is defined by


(c_\omega = \int_{t\in\mathbb{R}}f(t)\overline{e_\omega(t)}dt = \int_{t\in\mathbb{R}}f(t)e^{-2\pi i\omega t}dt.) (1.12)


Strictly speaking, one needs some technical adjustments in these constructions, which, however, will not play any further role for the applications to be discussed. Basically, (1.11) shows that a signal (f \in L^2(\mathbb{R})) can be written as an infinitesimal superposition of the elementary frequency functions (e_\omega). The numbers (c_\omega) have the same interpretation as the Fourier coefficients (c_k). The frequency-dependent function (\hat{f}:\mathbb{R}\rightarrow\mathbb{C}) defined by (\hat{f}(\omega):=c_\omega) is also called Fourier transform of f. Again, it can be shown that the transform is energy preserving. In other words, if (f \in L^2(\mathbb{R})), then (\hat{f} \in L^2(\mathbb{R})) and (||f||_{L^2(\mathbb{R})} = ||\hat{f}||_{L^2(\mathbb{R})}).


Python

import cmath
import numpy as np
import scipy.integrate as integrate
def continuous_fourier_transform(f):
    """Calculates the continuous Fourier transform of a function f.
    Args:
       f: The function to transform.
    Returns:
        A function representing the Fourier transform of f.
    """
    def f_hat(omega):
        # quad integrates real-valued functions, so the real and imaginary
        # parts of the integrand are handled separately
        real = integrate.quad(lambda t: (f(t) * cmath.exp(-2j * cmath.pi * omega * t)).real, -np.inf, np.inf)[0]
        imag = integrate.quad(lambda t: (f(t) * cmath.exp(-2j * cmath.pi * omega * t)).imag, -np.inf, np.inf)[0]
        return real + 1j * imag
    return f_hat

# Example usage (define a test function f):
def test_function(t):
    return np.exp(-t**2) # Gaussian function
f_hat = continuous_fourier_transform(test_function)
# Evaluate the Fourier transform at a specific frequency:
omega = 1.0
print(f"Fourier transform at omega={omega}: {f_hat(omega)}")
def discrete_time_fourier_transform(x):
    """Calculates the discrete-time Fourier transform of a sequence x.
    Args:
        x: The input sequence.
    Returns:
        A function representing the Fourier transform of x.
    """
    def x_hat(omega):
        n = np.arange(len(x))
        return np.sum(np.asarray(x) * np.exp(-2j * np.pi * omega * n))
    return x_hat

# Example usage:
x = [1, 2, 1, 0]
x_hat = discrete_time_fourier_transform(x)
omega = 0.5
print(f"Discrete-time Fourier transform at omega={omega}: {x_hat(omega)}")

These mathematical tools, initially used for analyzing vibrations and harmonics in eVTOL vehicles operating in turbulent conditions, have broad applications in thermodynamics and fluid mechanics.


Finally, the case of discrete-time (DT) signals can be regarded to be dual to the case of periodic continuous-time (CT) signals. As we will see, the Fourier transform of a DT signal is a periodic CT signal, whereas the Fourier transform of a periodic CT signal is a DT signal. More precisely, let (x \in l^2(\mathbb{Z})) be a DT signal of finite energy. Then the Fourier representation of x is given by


(x(n) = \int_{\omega\in[0,1]}c_\omega e_\omega(n)d\omega = \int_{\omega\in[0,1]}c_\omega e^{2\pi i\omega n}d\omega) (1.13)


for (n \in \mathbb{Z}), where (c_\omega) is defined by


(c_\omega = \sum_{n\in\mathbb{Z}} x(n)\overline{e_\omega(n)} = \sum_{n\in\mathbb{Z}} x(n) e^{-2\pi i\omega n}.) (1.14)


In other words, the signal x can be represented as an (infinitesimal) superposition of the 1-sampled elementary frequency functions (e_\omega), where only the frequencies (\omega \in [0,1]) are needed. Intuitively, the restriction of the frequency parameters to the set ([0, 1]) can be explained as follows: for an integer frequency parameter (k \in \mathbb{Z}), one has (e_\omega(n) = e_{\omega+k}(n)) for all sample points (n \in \mathbb{Z}). In other words, two frequency functions with an integer difference in their frequency parameter coincide on the set of sampling points and cannot be distinguished as 1-sampled discrete-time (DT) signals. This phenomenon is also known as aliasing. Analogous to the continuous-time (CT) case, the map (\hat{x}: [0, 1] \rightarrow \mathbb{C}) defined by (\hat{x}(\omega) := c_\omega) is also referred to as the Fourier transform of x. Furthermore, (\|\hat{x}\|_{L^2([0,1])} = \|x\|_{l^2(\mathbb{Z})}). An overview of all three variants of the Fourier representation can be found in Table 1.1.
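The 1-periodicity of the DT Fourier transform implied by (e_\omega(n) = e_{\omega+k}(n)) can be verified numerically (the test signal and frequencies are arbitrary choices):

```python
import numpy as np

# For integer k, e^{-2 pi i (omega + k) n} = e^{-2 pi i omega n} on the
# sample grid n in Z, so the DT Fourier transform is 1-periodic.
def x_hat(x, omega):
    n = np.arange(len(x))
    return np.sum(np.asarray(x) * np.exp(-2j * np.pi * omega * n))

x = [1.0, 2.0, 1.0, 0.0]
print(np.isclose(x_hat(x, 0.3), x_hat(x, 1.3)))  # True: 1-periodicity
```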



1.2.3 Discrete Fourier Transform


Computing the Fourier transform of signals involves the evaluation of integrals or infinite sums, which is, in general, computationally infeasible. Also, computing approximations of such integrals via Riemann sums can be computationally expensive. Therefore, to compute reasonable approximations of the Fourier transform at suitable frequency parameters, one has to employ fast algorithms—possibly at the expense of precision.


In most applications, one uses the discrete Fourier transform (DFT) for approximating the Fourier transform. Mathematically, the DFT of size N is a linear mapping (\mathbb{C}^N \rightarrow \mathbb{C}^N) given by the N × N matrix


(DFT_N := \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \Omega_N & \Omega_N^2 & \cdots & \Omega_N^{N-1} \\ 1 & \Omega_N^2 & \Omega_N^4 & \cdots & \Omega_N^{2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \Omega_N^{N-1} & \Omega_N^{2(N-1)} & \cdots & \Omega_N^{(N-1)(N-1)} \end{pmatrix}) (1.15)


where we set (\Omega_N := e^{-2\pi i/N}). For an input vector (v := (v(0), v(1),..., v(N-1))^T \in \mathbb{C}^N), the evaluation of the DFT is given by the matrix-vector product (\hat{v} := DFT_N v), where (\hat{v} = (\hat{v}(0), \hat{v}(1),..., \hat{v}(N-1))^T \in \mathbb{C}^N) denotes the output vector. Note that the straightforward computation of the matrix-vector product requires (O(N^2)) multiplications and additions. For most applications, this is too slow, since one often has to deal with large N (N >> 10^5). The important point is that there is an efficient algorithm, the so-called fast Fourier transform (FFT), which only requires (O(N \log N)) multiplications and additions.


Python

import numpy as np
def dft(v):
    """Computes the Discrete Fourier Transform (DFT) of a vector v.
    Args:
        v: The input vector.
    Returns:
        The DFT of v.
    """
    N = len(v)
    omega_N = np.exp(-2j * np.pi / N)
    DFT_matrix = np.array([[omega_N**(k*n) for n in range(N)] for k in range(N)])
    v_hat = np.dot(DFT_matrix, v)
    return v_hat
def fft(v):
    """Computes the Fast Fourier Transform (FFT) of a vector v.
    Args:
        v: The input vector.
    Returns:
        The FFT of v.
    """
    return np.fft.fft(v)
# Example usage:
v = np.random.rand(1024)
v_hat_dft = dft(v)
v_hat_fft = fft(v)
# Verify that the results are the same (within numerical precision):
print(np.allclose(v_hat_dft, v_hat_fft)) # Should print True
import time
start_time = time.time()
dft(v)
end_time = time.time()
print(f"Time taken by DFT: {end_time - start_time} seconds")
start_time = time.time()
fft(v)
end_time = time.time()
print(f"Time taken by FFT: {end_time - start_time} seconds")

The main idea of the FFT algorithm, which was originally found by Gauss in about 1805 and rediscovered by Cooley and Tukey in 1965, is based on a factorization of the DFT matrix into a sequence consisting of (O(\log N)) sparse matrices each of which can be evaluated with (O(N)) operations.


We now investigate how one can approximate certain values of the Fourier transform by the entries (\hat{v}(k), k \in [0: N-1]). We start with the case of a discrete-time (DT) signal (x \in l^2(\mathbb{Z})) and define an input vector v by setting (v := (x(0), x(1), ..., x(N-1))). From (\hat{v} = DFT_N \cdot v), we obtain


(\hat{v}(k) = \sum_{j=0}^{N-1} x(j) \Omega_N^{jk} = \sum_{j=0}^{N-1} x(j) e^{-2\pi i (k/N)j} = \hat{x}(\frac{k}{N}) - \sum_{j\in\mathbb{Z}\setminus[0:N-1]} x(j) e^{-2\pi i (k/N)j}) (1.16)


If the energy of x is concentrated in the interval ([0: N-1]), i.e., if (x(j) \approx 0) for (j \in \mathbb{Z} \setminus [0: N-1]), then (\hat{v}(k)) approximates (\hat{x}(\frac{k}{N})). In other words, the entry (\hat{v}(k)) approximates the Fourier transform (\hat{x}) at the frequency parameter (\omega = k/N). The vector (\hat{v}) can be regarded as an approximate (\frac{1}{N})-sampling in the interval ([0, 1]) of the 1-periodic Fourier transform (\hat{x} \in L^2([0,1])).
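This approximation can be checked numerically. The following sketch uses an assumed test signal x(n) = 0.9^n for n ≥ 0 (and zero otherwise), whose energy is concentrated near the origin and whose Fourier transform is known in closed form as a geometric series:

```python
import numpy as np

# Sketch of (1.16): for a DT signal whose energy is concentrated in
# [0 : N-1], the DFT entry v_hat(k) approximates x_hat(k/N).
# Closed form: x_hat(omega) = 1 / (1 - 0.9 * exp(-2*pi*1j*omega)).
N = 256
n = np.arange(N)
v = 0.9 ** n            # x restricted to [0 : N-1]; the tail is negligible
v_hat = np.fft.fft(v)   # v_hat(k) for k = 0, ..., N-1

k = 10
omega = k / N
x_hat_exact = 1.0 / (1.0 - 0.9 * np.exp(-2j * np.pi * omega))
print(abs(v_hat[k] - x_hat_exact))  # tiny truncation error (on the order of 0.9**N)
```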


More generally, starting with a continuous-time (CT) signal (f \in L^2(\mathbb{R})), one obtains a DT signal x by T-sampling the signal f, i.e., (x(n) := f(Tn)) for (n \in \mathbb{Z}). We first investigate the relation of the Fourier transforms (\hat{x}) and (\hat{f}):


(\hat{x}(\omega) = \sum_{j\in\mathbb{Z}} x(j) e^{-2\pi i\omega j} = \sum_{j\in\mathbb{Z}} f(Tj) e^{-2\pi i\omega j} \approx \int_{t\in\mathbb{R}} f(Tt) e^{-2\pi i\omega t} dt = \frac{1}{T} \int_{t\in\mathbb{R}} f(t) e^{-2\pi i(\omega/T) t} dt = \frac{1}{T} \hat{f}(\frac{\omega}{T})) (1.17)


Here, the approximation sign expresses that the value (\hat{x}(\omega)) is a Riemann sum of the integral ((1/T) \hat{f}(\omega/T)). The accuracy of the approximation very much depends on the smoothness of the integrand (f(Tt) e^{-2\pi i\omega t}). Note that in the computation of the Riemann sum, where one sums over (j \in \mathbb{Z}), the integrand is 1-sampled. Due to aliasing effects, at least arising from the factor (e^{-2\pi i\omega t}), it is hopeless to expect any meaningful approximation results for (\omega \in \mathbb{R} \setminus [-\frac{1}{2}, \frac{1}{2}]). Actually, (\hat{x}) is 1-periodic, whereas ((1/T) \hat{f}(\omega/T)) is an (L^2(\mathbb{R})) function, which approaches zero for (\omega \rightarrow \pm\infty). Within the interval ([-\frac{1}{2}, \frac{1}{2}]), however, in particular when approaching the frequency (\omega = 0), the sum (\hat{x}(\omega)) approximates the value ((1/T) \hat{f}(\omega/T)) with increasing accuracy.
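The quality of this Riemann-sum approximation can be illustrated with a Gaussian test signal f(t) = exp(−t²) (an assumption for illustration), whose Fourier transform √π·exp(−(πω)²) is known in closed form:

```python
import numpy as np

# Sketch of (1.17): the DT Fourier transform of the T-sampling of f,
# evaluated as a finite sum (the Gaussian tail is negligible),
# approximates (1/T) * f_hat(omega/T) for omega near 0.
T = 0.1
omega = 0.05                 # frequency well inside [-1/2, 1/2]
n = np.arange(-200, 201)
x = np.exp(-(T * n) ** 2)    # T-sampling of f(t) = exp(-t**2)
x_hat = np.sum(x * np.exp(-2j * np.pi * omega * n))

f_hat = np.sqrt(np.pi) * np.exp(-(np.pi * omega / T) ** 2)
print(abs(x_hat - f_hat / T))  # very small approximation error
```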


We now combine the results induced by the DFT approximation (1.16) and the Riemann approximation (1.17). As before, let (f \in L^2(\mathbb{R})) be a CT signal, x the T-sampling of f, and (v := (x(0), x(1), ..., x(N-1))^T) for some (N \in \mathbb{N}). Assuming that f is real-valued, one can easily check that (\overline{\hat{f}(\omega)} = \hat{f}(-\omega)), (\overline{\hat{x}(\omega)} = \hat{x}(-\omega)), and (\hat{v}(k) = \overline{\hat{v}(N-k)}). Therefore, we only have to consider positive frequency parameters. Then, we obtain from (1.16) and (1.17)


(\hat{v}(k) \approx \hat{x}(\frac{k}{N}) \approx \frac{1}{T} \hat{f}(\frac{k}{NT})) (1.18)


import numpy as np
def approximate_fourier_transform(f, T, N):
    """Approximates the Fourier transform of a continuous-time signal f using DFT.
    Args:
        f: The continuous-time signal function.
        T: The sampling period.
        N: The number of samples.
    Returns:
        A function that approximates the Fourier transform.
    """
    def f_hat_approx(omega):
        x = [f(T * n) for n in range(N)]
        v_hat = np.fft.fft(x)
        k = int(round(omega * N * T))  # nearest DFT bin: omega ≈ k/(N*T)
        if 0 <= k < N:
            return T * v_hat[k]  # by (1.18), f_hat(k/(N*T)) ≈ T * v_hat(k)
        else:
            return 0
    return f_hat_approx
# Example usage:
def f(t):
    return np.exp(-t**2)
T = 0.1
N = 1024
f_hat_approx = approximate_fourier_transform(f, T, N)
omega = 1.0
print(f"Approximate Fourier transform at omega={omega}: {f_hat_approx(omega)}")

The connection between the continuous Fourier transform, discrete-time Fourier transform, and the DFT, as described by equations 1.16-1.18, is particularly important for analyzing experimental or simulated fluid flow data. These equations provide a theoretical framework for understanding how discrete measurements of a continuous phenomenon relate to its underlying frequency content. In particular, (1.18) gives meaningful approximations only for (k=0,1,...,\lfloor\frac{N}{2}\rfloor). Furthermore, the coefficients (\hat{v}(k)) for (k=N-1, N-2,...,\lfloor\frac{N}{2}\rfloor+1) are redundant and correspond to negative frequencies. We illustrate this result with an example.



Assuming a sampling rate (1/T=50) and (N=100), we obtain (\hat{v}(k)\approx50\cdot\hat{f}(k/2)) for (k=0,1,...,N/2). In other words, the Fourier transform of f is approximated at the frequencies (\omega=0,1/2,1,3/2,...,25), where the highest frequency (\omega=1/(2T)=25) is the Nyquist frequency with respect to the sampling rate (1/T). This corresponds to an approximate ((1/(NT)))-sampling of the frequency domain in the range zero up to the Nyquist frequency. Note that one can increase the accuracy of the approximations as well as the resolution of the frequency domain by suitably modifying the numbers N and T. We refer to Fig. 1.12 for an illustration.


Here's Python code sketching the formula (\hat{v}(k)\approx50\cdot\hat{f}(k/2)), assuming the Fourier transform (\hat{f}) is available as a callable:


import numpy as np
def approximate_dft_coefficient(f_hat, k, sampling_rate=50):
    """
    Approximates the DFT coefficient v_hat(k) from the Fourier transform of f.
    Args:
        f_hat: The Fourier transform of f (a callable).
        k: The frequency index.
        sampling_rate: The sampling rate 1/T (here 50).
    Returns:
        The approximated value (1/T) * f_hat(k/2).
    """
    return sampling_rate * f_hat(k / 2)

# Example usage with an assumed Gaussian transform:
f_hat = lambda w: np.exp(-w ** 2)
k_values = [0, 1, 2, 3, 4]
for k in k_values:
    v_hat_approx = approximate_dft_coefficient(f_hat, k)
    print(f"Approximation for k={k}: {v_hat_approx}")

Fig. 1.12. (a) Continuous-time (CT) signal f shown on the time interval ([0,2]). The signal f is defined to be zero outside this interval. (b) Absolute value (|\hat{f}|) of the Fourier transform shown on the frequency interval [0,50]. (c) The absolute values (T\cdot|\hat{v}(k)|) for (k\in[0:N-1]), where (\hat{v}=DFT_{N}(v)) with (N=100), (v(k)=f(Tk)) for (k\in[0:N-1]), and (1/T=50). Note that (T\cdot\hat{v}(k)\approx\hat{f}(k/2)) for (k=0,...,50). (d) Analog with (N=50), (1/T=25). Note that (T\cdot\hat{v}(k)\approx\hat{f}(k/2)) for (k=0,...,25). However, the accuracy of the approximations, in particular for higher frequencies, is much lower than in (c).


Here's the Python code for the formulas (v(k)=f(Tk)), (T\cdot\hat{v}(k)\approx\hat{f}(k/2)):


import numpy as np
import scipy.fft
def discrete_fourier_transform_approximation(f, T, N):
  """
  Approximates the Fourier transform using the Discrete Fourier Transform (DFT).
  Args:
      f: The continuous-time signal function.
      T: The sampling period.
      N: The number of samples.
  Returns:
      A tuple containing the DFT and the corresponding frequencies.
  """
  k_values = np.arange(N)
  v = [f(T * k) for k in k_values]  # v(k) = f(Tk)
  v_hat = scipy.fft.fft(v)
  frequencies = k_values / (N * T)  # frequency of bin k is k/(NT)
  return v_hat, frequencies

# Example Usage
def my_signal(t):
    return np.sin(2 * np.pi * t)
T = 1 / 50
N = 100
v_hat, frequencies = discrete_fourier_transform_approximation(my_signal, T, N)
# By (1.18), T * |v_hat(k)| approximates |f_hat| at the frequency k/(NT) = k/2.
k = 10
print(f"Frequency k/(NT): {frequencies[k]}")
print(f"T*|v_hat(k)|: {T * np.abs(v_hat[k])}")

The Fourier transform is one of the most important transforms, particularly in the field of digital signal processing (DSP). The basic definitions and main properties are discussed in nearly every introductory book on this subject. For example, we refer to the classical introductory book Signals and Systems by Oppenheim et al. on DSP. A summary of the main properties of the Fourier transform as needed in the subsequent articles can also be found in Fourier Transform Properties by MIT. In the book Real Analysis by Folland, one finds a mathematically rigorous introduction to Lebesgue theory as well as to the Fourier transform. A soft introduction to the main ideas of time-frequency analysis is given in []. There also exists vast literature on the fast Fourier transform, which is also known as the Cooley-Tukey algorithm. For example, we refer to the original article by Cooley and Tukey, which covers the case where the length N of the DFT is a power of two. In [], one finds various variants of the FFT, which, in combination, make it possible to efficiently evaluate a DFT of arbitrary length N ∈ N with a time complexity of O(N log N).


So far, we have only touched upon the topic of sampling and aliasing, which are of crucial importance for digital signal processing. In general, a continuous-time (CT) signal has to be suitably approximated in order to describe it by a finite number of discrete parameters. Mathematically, the discrete set of parameters could be the Fourier coefficients (for periodic signals), the coefficients of polynomials (when representing a function by its Taylor series), or the values of a CT signal at a finite number of points in time, also referred to as sampling. In all cases there are certain requirements on the original CT signal, e.g., periodicity or differentiability, to guarantee certain bounds on the approximation error. In the case of sampling, these requirements concern the frequency content of the original signal. The famous sampling theorem by Shannon says that an Ω-bandlimited signal f ∈ L²(R) (i.e., the Fourier transform f̂ vanishes for |ω| > Ω for a real number Ω > 0) can be reconstructed perfectly from the T-sampling of f with T := 1/(2Ω), see [ , , ]. Similarly, one may reduce the sampling rate of a discrete-time (DT) signal x without loss of information if x does not contain any frequencies above half of the new, reduced sampling rate. For example, the sampling rate may be reduced from 44.1 kHz to 4 kHz without any aliasing artifacts if one ensures that there are no frequency components above 2 kHz. For further details, we refer to the literature.
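The reconstruction guaranteed by the sampling theorem can be sketched numerically via sinc interpolation, f(t) = Σ f(nT)·sinc((t − nT)/T). The test signal f(t) = sin(2π·3t) and the band limit Ω = 4 are assumptions chosen for illustration; the finite number of samples makes the reconstruction only approximate:

```python
import numpy as np

# Sketch of Shannon's sampling theorem: a bandlimited signal is recovered
# from its T-sampling with T = 1/(2*Omega) by sinc interpolation.
Omega = 4.0
T = 1.0 / (2.0 * Omega)
n = np.arange(-400, 401)   # finitely many samples -> small truncation error
samples = np.sin(2 * np.pi * 3 * n * T)

def reconstruct(t):
    """Sinc interpolation of the sampled signal at time t."""
    return np.sum(samples * np.sinc((t - n * T) / T))

t = 0.123  # an off-grid time instant
print(abs(reconstruct(t) - np.sin(2 * np.pi * 3 * t)))  # small truncation error
```

Note that np.sinc is the normalized sinc function sin(πt)/(πt), matching the definition used later in (1.26).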


We close this section by discussing some technical details concerning the Lebesgue spaces and the Fourier transform. For further details and the proofs we refer to Functional Analysis by Peter Lax. Firstly, in the definition of the Lebesgue spaces one has to postulate the Borel measurability of the functions for the integrals to be defined. Secondly, the spaces L<sup>p</sup>(R) and L<sup>p</sup>([0, 1]) are actually defined to be quotient spaces where two functions f, g ∈ L<sup>p</sup>(R) are identified if ||f − g||<sub>p</sub> = 0, i.e., if they differ only up to a null set. Thirdly, the equality in the Fourier representation and in the Fourier transform is just an equality in the L²-sense, i.e., equality up to a null set. Under additional assumptions, one obtains pointwise equality. For example, if f is a continuously differentiable periodic continuous-time (CT) signal, the Fourier series converges uniformly to f on the interval [0, 1]. Fourthly, the integral in the definition (1.12) of the Fourier transform of a signal (f \in L^2(\mathbb{R})) does not exist in general. Therefore, one usually defines the Fourier transform for signals (f \in L^1(\mathbb{R}) \cap L^2(\mathbb{R})) and then extends the definition to all signals (f \in L^2(\mathbb{R})) by continuity, using the density of (L^1(\mathbb{R}) \cap L^2(\mathbb{R})) in (L^2(\mathbb{R})).


Here is the Python code for T := 1/(2Ω):


def calculate_sampling_period(omega):
    """Calculates the sampling period T.
    Args:
        omega: The frequency Omega.
    Returns:
        The sampling period T.
    """
    return 1 / (2 * omega)

# Example:
omega = 10  # Example value for Omega
T = calculate_sampling_period(omega)
print(f"Sampling period T for Omega = {omega}: {T}")

Explicitly, the Fourier transform f̂ can be defined as a limit over increasing finite integration domains:


\begin{equation}


\hat{f}(\omega) := \lim_{N \to \infty} \int_{[-N, N]} f(t)e^{-2\pi i\omega t} dt. \label{eq:1.19} \tag{1.19}


\end{equation}


Similarly, one has to define the Fourier representation, see (1.11). Finally, note that the exponential functions (e_\omega: \mathbb{R} \to \mathbb{C}) are not contained in (L^2(\mathbb{R})) (as a ((1/\omega))-periodic function they do not possess finite energy over (\mathbb{R})). Therefore, the integral in (1.12) cannot be written as an inner product as it has been possible for the Fourier series (1.8).


Here's the Python code for equation (1.19):


import numpy as np
import scipy.integrate as integrate
def fourier_transform(f, omega, N_max=1000):
    """
    Computes the Fourier transform of a function f via (1.19).
    Args:
        f: The function to transform.
        omega: The frequency.
        N_max: The integration limit (approximating the limit N -> infinity).
    Returns:
        The Fourier transform at the given frequency.
    """
    # scipy.integrate.quad handles real-valued integrands, so we integrate
    # the real and imaginary parts of f(t) * exp(-2*pi*1j*omega*t) separately.
    def integrand_real(t):
        return f(t) * np.cos(2 * np.pi * omega * t)
    def integrand_imag(t):
        return -f(t) * np.sin(2 * np.pi * omega * t)
    real_part, _ = integrate.quad(integrand_real, -N_max, N_max)
    imag_part, _ = integrate.quad(integrand_imag, -N_max, N_max)
    return real_part + 1j * imag_part

# Example usage:
def my_function(t):
    return np.exp(-t**2)  # Example function
omega = 1.0  # Example frequency
fourier_result = fourier_transform(my_function, omega)
print(f"Fourier transform at omega={omega}: {fourier_result}")


1.3 Digital Filters


The term filter is generally used to denote any procedure that alters a signal's characteristics. In the context of eVTOLs, various types of filters are applied to modify the vibrations and turbulence in some specified way and to change their spectral properties. Of particular interest are filters that boost or attenuate a signal's frequencies in certain frequency bands. The most important class of digital filters is based on the concept of convolution, which can be thought of as a general moving average (Sect. 1.3.1). As it turns out, all filters that satisfy certain linearity, invariance, and stability conditions can be expressed by means of a convolution with a suitable discrete-time (DT) signal h. The Fourier transform of h is referred to as the frequency response of the filter and exhibits important characteristics on the frequency selectivity of the underlying filter (Sect. 1.3.2). The frequency response can be used to specify certain filter characteristics as required for a specific application. For example, to reduce the sampling rate from 44.1 kHz to 4 kHz, one needs an anti-aliasing filter that removes all frequencies above 2000 Hz while retaining all frequencies below 2000 Hz. Issues concerning filter specifications and filter design are discussed in Sects. 1.3.3 and 1.3.4. Further comments and references to the literature are found in Sect. 1.3.5.



1.3.1 Convolution Filters


Mathematically, a filter can be regarded as a mapping (I \to O) that transforms an input signal (x \in I) into an output signal (y \in O) where I and O denote suitable signal spaces. In the following, we restrict ourselves to the case of discrete signals and consider, if not stated otherwise, the signal spaces (I = l^2(\mathbb{Z})) and (O = l^2(\mathbb{Z})). Some remarks on the CT case can be found in Sect. 1.3.5.


An important class of filters can be described by so-called convolution—a concept that constitutes an indispensable mathematical tool in functional analysis. Loosely speaking, the convolution of two discrete-time (DT) signals x and y is a kind of multiplication that produces again a DT signal x ∗ y and that in a sense represents the amount of overlap between x and a reversed and translated version of y. The convolution of x and y at position n ∈ ℤ is defined by


\begin{equation}


(x * y)(n) := \sum_{k \in \mathbb{Z}} x(k)y(n - k). \label{eq:1.20} \tag{1.20}


\end{equation}



Note that the sum in (1.20) could be infinite for general x and y. Therefore, the convolution x ∗ y exists only under suitable conditions on the DT signals. The easiest condition is that x and y have finitely many nonzero entries. To be more specific, we define the length l(x) of such a signal x to be


\begin{equation}


l(x) := 1 + \max\{n \mid x(n) \ne 0\} - \min\{n \mid x(n) \ne 0\}. \label{eq:1.21} \tag{1.21}


\end{equation}



Then, for two signals x and y of finite positive length, the convolution x ∗ y exists and l(x ∗ y) = l(x) + l(y) − 1. Another condition, also known as Young's inequality, says that if (x \in l^1(\mathbb{Z})) and (y \in l^p(\mathbb{Z})), then |(x ∗ y)(n)| < ∞ for all n ∈ ℤ and (\|x * y\|_p \le \|x\|_1 \|y\|_p). For a proof and further sufficient conditions, we refer to [].
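Both facts are easy to check numerically on small test signals (the concrete values below are arbitrary illustration choices; np.convolve computes the linear convolution of finitely supported signals):

```python
import numpy as np

# Sketch: l(x * y) = l(x) + l(y) - 1 and Young's inequality
# ||x * y||_p <= ||x||_1 * ||y||_p, here with p = 2.
x = np.array([1.0, -2.0, 3.0])   # l(x) = 3
y = np.array([4.0, 5.0])         # l(y) = 2
z = np.convolve(x, y)            # linear convolution x * y

print(len(z) == len(x) + len(y) - 1)  # True: l(x * y) = 3 + 2 - 1 = 4

lhs = np.linalg.norm(z, 2)
rhs = np.linalg.norm(x, 1) * np.linalg.norm(y, 2)
print(lhs <= rhs)  # True: Young's inequality
```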



Here's the Python code for equation (1.20):


import numpy as np
def convolution(x, y):
    """Computes the convolution of two discrete signals.
    Args:
        x: The first signal (NumPy array).
        y: The second signal (NumPy array).
    Returns:
        The convolution of x and y (NumPy array).
    """
    n = len(x) + len(y) - 1
    result = np.zeros(n)
    for i in range(n):
        for k in range(len(x)):
            if 0 <= i - k < len(y):
                result[i] += x[k] * y[i - k]
    return result

# Example:
x = np.array([1, 2, 3])
y = np.array([4, 5])
convolved = convolution(x, y)
print(f"Convolution of x and y: {convolved}")

Here's the Python code for equation (1.21):


import numpy as np
def signal_length(x):
    """Computes the length of a discrete signal. 
    Args:
        x: The signal (NumPy array).
    Returns:
        The length of the signal.
    """
    nonzero_indices = np.nonzero(x)[0]
    if len(nonzero_indices) == 0:
        return 0  # Handle the case where the signal is all zeros
    return 1 + np.max(nonzero_indices) - np.min(nonzero_indices)

# Example:
x = np.array([0, 1, 2, 0, 3, 0])
length = signal_length(x)
print(f"Length of x: {length}")

Fixing a DT signal (h \in l^1(\mathbb{Z})), Young's inequality implies that one obtains an operator (C_h: l^2(\mathbb{Z}) \to l^2(\mathbb{Z})) by setting (C_h(x) := h * x) for (x \in l^2(\mathbb{Z})). This operator, which is also referred to as a convolution filter, has the following properties. Firstly, (C_h) is linear, i.e., (C_h(ax + by) = aC_h(x) + bC_h(y)) for (x, y \in l^2(\mathbb{Z})) and a, b ∈ ℂ. Secondly, (C_h) is invariant under time shifts, i.e., first shifting a signal in time and then applying (C_h) yields the same result as first applying (C_h) and then shifting the output signal. More precisely, let (x_k(n) := x(n - k)) for n ∈ ℤ denote the time shift of a signal x by k ∈ ℤ, then (C_h(x_k) = (C_h(x))_k). Thirdly, let δ: ℤ → ℂ denote the impulse signal with δ(0) = 1 and δ(n) := 0 for n ∈ ℤ \ {0}, then (C_h(δ) = h), i.e., h can be recovered from (C_h).
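The three properties can be verified numerically on finitely supported test signals (a sketch; np.convolve plays the role of C_h here, and the random signals are arbitrary illustration inputs):

```python
import numpy as np

# Sketch: checking linearity, shift invariance, and C_h(delta) = h
# for the convolution operator C_h on finitely supported signals.
rng = np.random.default_rng(0)
h = rng.standard_normal(4)
x = rng.standard_normal(16)
y = rng.standard_normal(16)
a, b = 2.0, -3.0

C = lambda s: np.convolve(h, s)  # C_h(s) = h * s

# 1) Linearity: C_h(a*x + b*y) = a*C_h(x) + b*C_h(y)
print(np.allclose(C(a * x + b * y), a * C(x) + b * C(y)))  # True

# 2) Shift invariance: shifting x by k samples (prepending k zeros)
#    and filtering equals filtering and then shifting the output.
k = 3
x_shifted = np.concatenate([np.zeros(k), x])
print(np.allclose(C(x_shifted)[k:], C(x)))  # True

# 3) The impulse response is recovered: C_h(delta) = h
delta = np.zeros(8)
delta[0] = 1.0
print(np.allclose(C(delta)[:len(h)], h))  # True
```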


Now, the importance of convolution in the filter context stems from the following fact: all linear filters (T: l^2(\mathbb{Z}) \to l^2(\mathbb{Z})) that are invariant under time shifts and satisfy a continuity condition can be expressed as a convolution filter. Indeed, defining (h := T(δ)), one can show that T = (C_h). In this case, the DT signal h is also referred to as the impulse response of the filter T, and h(n), n ∈ ℤ, is called the nth filter coefficient. Furthermore, if (h \in l^1(\mathbb{Z})), the filter T is called stable. In summary, there is a one-to-one correspondence between stable filters with the above properties and elements in (l^1(\mathbb{Z})). Therefore, one often simply speaks of the filter (h \in l^1(\mathbb{Z})) meaning the underlying convolution filter T = (C_h). In the following, if not stated otherwise, all filters are assumed to be of the form T = (C_h).


We close this section with some more definitions. A filter (T = C_h) is called a Finite Impulse Response (FIR) filter if h has finite length and an Infinite Impulse Response (IIR) filter otherwise. If (h \ne 0) is the impulse response of some FIR filter, then l(h) is also called the length and l(h) − 1 the order of the FIR filter. Finally, a filter is called causal if h(n) = 0 for n < 0. The property of causality becomes important in the context of real-time signal processing applications, where one cannot observe the future values of the signal. For example, filtering an input signal x by a causal FIR filter (T = C_h) of order N is described by


\begin{equation}


T(x)(n) = \sum_{l=0}^{N} h(l)x(n - l) \label{eq:1.22} \tag{1.22}


\end{equation}


for filter coefficients h(0), ..., h(N) with h(0) ≠ 0 and h(N) ≠ 0. The output signal T(x) at point n only depends on the "past" samples x(n − 1), ..., x(n − N) and the "present" sample x(n) of the input signal x.


Here's the Python code for equation (1.22):


import numpy as np
def causal_fir_filter(x, h):
    """Applies a causal FIR filter to a signal.
    Args:
        x: The input signal (NumPy array).
        h: The filter coefficients (NumPy array).
    Returns:
        The filtered signal (NumPy array).
    """
    N = len(h) - 1
    output = np.zeros(len(x))  # float output, regardless of x's dtype
    for n in range(len(x)):
        for l in range(N + 1):
            if n - l >= 0:
                output[n] += h[l] * x[n - l]
    return output

# Example usage:
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
h = np.array([0.1, 0.2, 0.3])
filtered_signal = causal_fir_filter(x, h)
print(f"Filtered signal: {filtered_signal}")

1.3.2 Frequency Response


To characterize the properties of a convolution filter (T = C_h), the Fourier transform Ĥ of the impulse response h, also called the frequency response of the filter T, plays a crucial role. In the filter context, one often uses capital letters G, H, X, Y to denote the Fourier transform of discrete filters g, (h \in l^1(\mathbb{Z})) or DT signals x, (y \in l^2(\mathbb{Z})).


One main property of the Fourier transform is that convolution in the time domain is transferred into point-wise multiplication in the spectral domain, see []. In other words, if (y = C_h(x) = h * x) denotes the output signal for some input signal (x \in l^2(\mathbb{Z})) and some filter (h \in l^1(\mathbb{Z})), then


\begin{equation}



Y(\omega) = \hat{y}(\omega) = \widehat{h * x}(\omega) = \hat{h}(\omega) \cdot \hat{x}(\omega) = H(\omega) \cdot X(\omega). \label{eq:1.23} \tag{1.23}


\end{equation}



Here's the Python code for equation (1.23):


import numpy as np
import scipy.fft
def frequency_domain_filtering(x, h):
    """Applies filtering in the frequency domain via (1.23).
    Args:
        x: The input signal (NumPy array).
        h: The filter coefficients (NumPy array).
    Returns:
        The filtered signal (NumPy array).
    """
    # Zero-pad both signals so that the pointwise product corresponds to
    # a linear (rather than circular) convolution.
    n = len(x) + len(h) - 1
    X = scipy.fft.fft(x, n=n)
    H = scipy.fft.fft(h, n=n)
    Y = H * X
    y = scipy.fft.ifft(Y)
    return y.real  # real-valued inputs yield (numerically) real outputs

# Example:
x = np.array([1, 2, 3, 4])
h = np.array([0.5, 0.5])
y = frequency_domain_filtering(x, h)
print(f"Filtered signal: {y}")

The equality (Y(\omega) = H(\omega) \cdot X(\omega)) can be interpreted in the following way. Recall from Sect. 1.2.2 that the value X(ω) encodes the contribution of the elementary frequency function n ↦ e<sub>ω</sub>(n) to x. Hence, the filter h modifies this contribution according to the value H(ω). If |H(ω)| > 1, the signal component of frequency ω is boosted; if |H(ω)| = 1, it is left unchanged; and if |H(ω)| < 1, it is attenuated. The phase of H(ω) encodes the relative shift of the corresponding elementary frequency function in the filtering process, see Sect. 1.3.3 for further details. Representing the complex value H(ω) in polar coordinates


\begin{equation}


H(\omega) = |H(\omega)| \cdot e^{2\pi i\Phi_h(\omega)} \label{eq:1.24} \tag{1.24}


\end{equation}



the resulting function


\begin{equation}


\omega \mapsto |H(\omega)|


\end{equation}



is called the magnitude response, and the function


\begin{equation}


\omega \mapsto \Phi_h(\omega)


\end{equation}


is called the phase response of the filter h. We refer to Fig. 1.13 for an example. The magnitude response is often displayed in a scale measured in decibels (dB), which is suitable to study the behavior of values close to 0. In the decibel scale, a positive value (r \in \mathbb{R}_{>0}) is transformed into the value Δ(r) := 20 log<sub>10</sub>(r). This transformation is monotonically increasing, Δ(1) = 0, lim<sub>r→0</sub>Δ(r) = −∞. Note that Δ(2r) ≈ 6 + Δ(r), i.e., doubling the value corresponds to an increase of about 6 dB.
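A minimal sketch of the decibel mapping just described:

```python
import numpy as np

# The decibel mapping Delta(r) = 20 * log10(r) described above.
def delta_db(r):
    """Decibel value of a positive magnitude r."""
    return 20.0 * np.log10(r)

print(delta_db(1.0))                  # 0.0
print(delta_db(2.0) - delta_db(1.0))  # ~6.02: doubling adds about 6 dB
print(delta_db(0.5))                  # ~-6.02: halving subtracts about 6 dB
```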


We summarize some properties of the frequency response of a filter, cf. Sect. 1.2.2. Firstly, if h is a real-valued filter, then H̄(ω) = H(−ω), |H(ω)| = |H(−ω)| (magnitude response is an even function), and Φ<sub>h</sub>(−ω) = −Φ<sub>h</sub>(ω) (phase response is an odd function). Secondly, since the frequency response is a 1-periodic function, H is specified by its values on the interval [0, ½] for a real-valued filter h. Thirdly, spectral information concerning low frequencies is encoded by values H(ω) with ω ≈ 0, whereas spectral information concerning high frequencies is encoded by values H(ω) with ω ≈ ½. As an example, suppose x is the T-sampling with sampling rate 1/T = 44.1 kHz of a CT signal f. Recall from (1.17) that X(ω) ≈ (1/T)f̂(ω/T) for ω ∈ [0, ½]. Let h be a filter with a frequency response that has, for example, the values H(0) = 0 and H(¼) = 2. Then by passing x through the filter h the low frequencies are cut off, whereas the frequencies around ω = ¼ are amplified by a factor of 2. Note that these frequencies correspond to frequencies around 11025 Hz in the original CT signal f, which shows how the interpretation of the filter effects depends on the sampling rate.


According to the magnitude response |H|, one distinguishes various types of filters. In particular, let 0 ≤ ω<sub>0</sub> < ω<sub>1</sub> ≤ ½ and let h be a real-valued filter with magnitude response


\begin{equation}


|H(\omega)| \approx


\begin{cases}


1 & \text{if } \omega \in (\omega_0, \omega_1) \\


0 & \text{if } \omega \in [0, \omega_0] \cup [\omega_1, \frac{1}{2}]


\end{cases} \label{eq:1.25} \tag{1.25}


\end{equation}



Fig. 1.14. Magnitude responses of (a) an ideal lowpass filter with cutoff frequency ω<sub>1</sub> = 1/4, (b) an ideal highpass filter with cutoff frequency ω<sub>0</sub> = 1/4, and (c) an ideal bandpass filter with cutoff frequencies ω<sub>0</sub> = 1/8 and ω<sub>1</sub> = 3/8.


Then h is called a lowpass filter if ω<sub>0</sub> = 0, a highpass filter if ω<sub>1</sub> = 1/2, and a bandpass filter if 0 < ω<sub>0</sub> and ω<sub>1</sub> < 1/2, see Fig. 1.14. These filters pass frequencies within the range (ω<sub>0</sub>, ω<sub>1</sub>), also referred to as the passband, while rejecting the frequencies outside that range, also referred to as the stopband. The frequencies ω<sub>0</sub> and ω<sub>1</sub> are also called cutoff frequencies.
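The three ideal filter types of Fig. 1.14 can be approximated, for instance, by windowed-sinc designs. The following sketch assumes a Hamming window and filter order 50 (both arbitrary illustration choices; window design is discussed in the filter-design literature), building the highpass by spectral inversion and the bandpass as a difference of two lowpass filters:

```python
import numpy as np

# Sketch: windowed-sinc FIR approximations of the ideal filters of Fig. 1.14.
# The ideal lowpass coefficients 2*w_c*sinc(2*w_c*n) are truncated and windowed.
N = 50                                 # filter order (length N + 1)
n = np.arange(-N // 2, N // 2 + 1)
window = np.hamming(N + 1)

def lowpass(w_c):
    """Windowed ideal lowpass with cutoff frequency w_c in [0, 1/2]."""
    return 2 * w_c * np.sinc(2 * w_c * n) * window

h_lp = lowpass(0.25)                    # lowpass,  cutoff omega_1 = 1/4
h_bp = lowpass(3 / 8) - lowpass(1 / 8)  # bandpass, passband (1/8, 3/8)
delta = (n == 0).astype(float)
h_hp = delta - lowpass(0.25)            # highpass via spectral inversion

def magnitude(h, w):
    """|H(w)| of the (zero-phase) filter h at frequency w in [0, 1/2]."""
    return abs(np.sum(h * np.exp(-2j * np.pi * w * n)))

# ~1 in the passband, ~0 in the stopband for each type:
print(magnitude(h_lp, 0.05), magnitude(h_lp, 0.45))
print(magnitude(h_hp, 0.45), magnitude(h_hp, 0.05))
print(magnitude(h_bp, 0.25), magnitude(h_bp, 0.45))
```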


1.3.3 Filter Specifications


In the design of frequency-selective filters, the desired filter characteristics are specified in the frequency domain in terms of the magnitude and the phase response. One then determines the coefficients of an FIR or IIR filter that satisfies or closely approximates the desired frequency response specifications. There are many different theoretical methods for filter design which have been implemented and incorporated in numerous computer software programs such as []. We now summarize some of the common filter specifications in terms of a filter's magnitude and phase response.


Suppose we want to design an ideal lowpass filter with cutoff frequency ω<sub>0</sub>. Theoretically, one obtains such an ideal filter simply by applying the inverse Fourier transform to the ideal frequency response indicated in Fig. 1.14a. Let sinc: ℝ → ℝ denote the sinc function as defined by



\begin{equation}


\text{sinc}(t) :=


\begin{cases}


\frac{\sin(\pi t)}{\pi t} & \text{for } t \ne 0, \\


1 & \text{for } t = 0.


\end{cases} \label{eq:1.26} \tag{1.26}


\end{equation}



Here's the python code for equation (1.26):


import numpy as np
def sinc(t):
    """Computes the sinc function.
    Args:
        t: The input value.
    Returns:
        The sinc function value.
    """
    if t == 0:
        return 1.0
    else:
        return np.sin(np.pi * t) / (np.pi * t)

# Example
t_values = np.array([-1, -0.5, 0, 0.5, 1])
sinc_values = [sinc(t) for t in t_values]
print(f"Sinc values: {sinc_values}")

Then, a straightforward computation yields that the filter coefficients of such an ideal filter are given by h(n) = 2ω<sub>0</sub>sinc(2ω<sub>0</sub>n) for n ∈ ℤ. This filter, however, has several drawbacks: It has infinitely many nonzero filter coefficients, and it is neither causal nor stable. In view of an actual computer implementation, filtering with an ideal filter is not feasible. Therefore, one has to work with approximations that typically reveal the following phenomena: the frequency response H of a realizable filter shows ripples in the passband and stopband. In addition, H has no infinitely sharp cutoff from passband to stopband, i.e., H cannot drop from unity to zero abruptly and sharp discontinuities are smeared into a range of frequencies, see Fig. 1.15.
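The ripple phenomenon can be made concrete with a small numerical experiment: truncating the ideal coefficients h(n) = 2ω₀·sinc(2ω₀·n) (here with an assumed cutoff ω₀ = 1/4) leaves a passband overshoot near the cutoff that does not vanish as the truncation length grows (the Gibbs phenomenon):

```python
import numpy as np

# Sketch: truncating the ideal lowpass coefficients yields passband
# ripples (Gibbs phenomenon) instead of an ideal brick-wall response.
w0 = 0.25
for N in (16, 64, 256):  # keep only the coefficients h(n) for |n| <= N
    n = np.arange(-N, N + 1)
    h = 2 * w0 * np.sinc(2 * w0 * n)
    grid = np.linspace(0.0, 0.249, 997)  # frequencies below the cutoff
    # H is real since h is symmetric (zero-phase filter):
    H = np.array([np.sum(h * np.exp(-2j * np.pi * w * n)).real for w in grid])
    print(f"N={N}: max passband overshoot = {H.max() - 1:.3f}")  # does not decay
```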



Fig. 1.15. Magnitude characteristics of physically realizable lowpass filters.


In applications, some degradations in the frequency response may be tolerable, such as a small amount of ripples in the passband or stopband. The following filter specifications are explained for the case of a lowpass filter and are illustrated by Fig. 1.15. The transition of the frequency response from passband to stopband is called the transition band of the filter. The band edge frequency ω<sub>p</sub> defines the edge of the passband, while the frequency ω<sub>s</sub> denotes the beginning of the stopband. The difference ω<sub>s</sub> − ω<sub>p</sub> is referred to as the transition width. Similarly, the width of the passband is called the bandwidth of the filter. For example, if the filter is lowpass with a passband edge frequency ω<sub>p</sub>, its bandwidth is ω<sub>p</sub>. If there are ripples in the passband of the filter, the maximal deviations of the ripples above and below 1 are denoted by δ<sub>0</sub> and δ<sub>1</sub>, respectively. Then, the magnitude |H| varies within the interval [1 − δ<sub>1</sub>, 1 + δ<sub>0</sub>]. The maximal value of ripples in the stopband of the filter is denoted as δ<sub>2</sub>. In the same way, filter characteristics can be defined for a highpass filter. In the case of a bandpass filter, one has two stopbands as well as two corresponding transition bands to the left and right of the passband. Accordingly, one has to define parameters for each of these bands separately. In a typical filter design problem, given the above specifications and given a class of filters, a filter h is constructed that lies within this class and best fits the desired design requirements. Of course, the degree to which H approximates the specifications depends in particular on the order of the filter.
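As an illustration, such a tolerance scheme can be fed directly into standard design routines. The following sketch — a library choice of ours, not prescribed by the text — uses SciPy's `remez` (Parks–McClellan) routine to design a linear-phase FIR lowpass with passband edge ω<sub>p</sub> = 0.1 and stopband edge ω<sub>s</sub> = 0.15, and then measures the achieved ripples numerically:

```python
import numpy as np
from scipy.signal import remez, freqz

# Tolerance scheme: passband [0, 0.1], stopband [0.15, 0.5] (frequencies in
# cycles per sample, Nyquist = 0.5), with desired gains 1 and 0.
h = remez(numtaps=41, bands=[0, 0.1, 0.15, 0.5], desired=[1, 0])

# Measure the achieved ripples delta_1 (passband) and delta_2 (stopband).
w, H = freqz(h, worN=8192)
omega = w / (2 * np.pi)                  # back to cycles per sample
mag = np.abs(H)
delta_pass = np.max(np.abs(mag[omega <= 0.1] - 1))
delta_stop = np.max(mag[omega >= 0.15])
print(f"passband ripple: {delta_pass:.4f}, stopband ripple: {delta_stop:.4f}")
```

The resulting coefficients are symmetric, so the filter has linear phase; tightening the ripple bounds or narrowing the transition band requires a larger number of taps.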


So far, we have only considered specifications that concern the magnitude response of a filter. Next, we discuss filter characteristics encoded by the phase response, see (1.24). To substantiate the interpretation of the phase response, we investigate the effect of a filter h on the 1-sampled elementary frequency function n ↦ e<sub>ω</sub>(n) := e<sup>2πiωn</sup>:


\begin{equation}
(h * e_\omega)(n) = \sum_{k \in \mathbb{Z}} h(k)e^{2\pi i\omega(n-k)} = H(\omega)e_\omega(n) = |H(\omega)|e^{2\pi i(\omega n + \Phi_h(\omega))}. \label{eq:1.27} \tag{1.27}
\end{equation}



Note that the phase induces a time shift in the elementary frequency function, which generally depends on the parameter ω. Thinking of a general signal as a superposition of amplified elementary frequency functions, such frequency-dependent delays generally destroy the delicate interaction of different frequency components, also referred to as constructive and destructive interference, and lead to strong and undesirable distortions in the output signal. Therefore, one often poses the requirement that Φ<sub>h</sub>(ω) = cω modulo 1 for some constant c ∈ ℝ. In this case, one obtains (h ∗ e<sub>ω</sub>)(n) = |H(ω)|e<sub>ω</sub>(n + c), where the delay c is independent of the frequency ω. For a general input signal, such a consistent delay in all frequency components simply leads to an overall delay in the filtering process, which is not considered a distortion. This observation leads to the following definitions. If Φ<sub>h</sub>(ω) = cω modulo 1 for some c ∈ ℝ, the filter h is said to be of linear phase, see Fig. 1.13 for an example. The function τ<sub>h</sub>: [0, 1] → ℝ defined by


\begin{equation}
\tau_h(\omega) := -\frac{d\Phi_h}{d\omega}(\omega) \label{eq:1.28} \tag{1.28}
\end{equation}


is called the group delay of the filter h, where discontinuities in the phase that are due to integer ambiguities are left unconsidered. The value τ<sub>h</sub>(ω) can be interpreted as the time delay that a signal component of frequency ω undergoes in the filtering process. Obviously, the group delay is constant for a filter that has linear phase. In the next section, we give some examples and discuss an offline strategy that allows us to compensate for phase distortions for filters with nonlinear phase.



Here's the Python code for equation (1.28):


import numpy as np

def group_delay(phase_response, omega_values):
    """Calculates the group delay of a filter via equation (1.28).
    Args:
        phase_response: A function representing the phase response.
        omega_values: An array of frequency values.
    Returns:
        An array of group delay values.
    """
    return -np.gradient(phase_response(omega_values), omega_values)

# Example usage: a linear phase yields a constant group delay.
def phase_response(omega):
    return -2 * omega

omega_values = np.linspace(0, 1, 100)
tau = group_delay(phase_response, omega_values)
print(tau)  # constant value 2


1.3.4 Examples


We start our discussion with the case of FIR filters, which possess an impulse response of finite length. Note that any FIR filter can be made causal by suitably shifting the filter coefficients. It can be shown that the property of linear phase is equivalent to a certain symmetry or asymmetry condition on the filter coefficients. More precisely, let h be a causal FIR filter of order N, then h has linear phase if and only if h(n) = ±h(N − n) for n = 0, 1, ..., N, see [, ].
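As a quick numerical sanity check — using helper names of our own choosing — the (anti)symmetry condition h(n) = ±h(N − n) is easy to test directly on the coefficients:

```python
import numpy as np

def has_linear_phase(h, tol=1e-12):
    """Checks whether a causal FIR filter h(0), ..., h(N) satisfies the
    symmetry or antisymmetry condition h(n) = +/- h(N - n), which is
    equivalent to the filter having linear phase."""
    h = np.asarray(h, dtype=float)
    symmetric = np.allclose(h, h[::-1], atol=tol)
    antisymmetric = np.allclose(h, -h[::-1], atol=tol)
    return symmetric or antisymmetric

# Example: a symmetric (moving-average-like) filter has linear phase,
# a generic causal filter does not.
print(has_linear_phase([1, 2, 3, 2, 1]))   # True
print(has_linear_phase([1, 0.5, 0.25]))    # False
```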


As a first example, suppose we want to design a causal FIR filter that approximates the ideal lowpass filter having a cutoff frequency ω<sub>0</sub> while having a linear phase. Recall from Sect. 1.3.3 that the ideal lowpass filter is obtained by 1-sampling the sinc function t ↦ 2ω<sub>0</sub>sinc(2ω<sub>0</sub>t), which is a symmetric function. To obtain a causal FIR h<sub>N</sub> with linear phase, one can simply take a shifted version of the middle part of these coefficients. Thus, the nonzero coefficients are given by



\begin{equation}
h_N(n) = 2\omega_0\text{sinc}\left(2\omega_0\left(n - \frac{N}{2}\right)\right), \quad 0 \le n \le N. \label{eq:1.29} \tag{1.29}
\end{equation}



Here's the Python code for equation (1.29):


import numpy as np

def linear_phase_fir_lowpass(omega0, N):
    """Generates coefficients for a linear-phase FIR lowpass filter.
    Args:
        omega0: The cutoff frequency.
        N: The filter order.
    Returns:
        A NumPy array of filter coefficients.
    """
    n = np.arange(0, N + 1)
    h_N = 2 * omega0 * np.sinc(2 * omega0 * (n - N / 2))
    return h_N

# Example:
omega0 = 0.25
N = 20
h = linear_phase_fir_lowpass(omega0, N)
print(f"Filter coefficients: {h}")

Fig. 1.16. Approximations of the ideal lowpass filter with cutoff frequency ω<sub>0</sub> = 1/8.




Fig. 1.16 shows the filters h<sub>N</sub> for various orders in the case ω<sub>0</sub> = 1/8. For even N, the frequency response of h<sub>N</sub> is given by


\begin{equation}
H_N(\omega) = e^{-2\pi i\omega N/2} \sum_{n=-N/2}^{N/2} 2\omega_0 \text{sinc}(2\omega_0 n) e^{-2\pi i n \omega} \label{eq:1.30} \tag{1.30}
\end{equation}



Here's the Python code for equation (1.30):


import numpy as np

def frequency_response_hn(omega, omega0, N):
    """Calculates the frequency response of h_N.
    Args:
        omega: The frequency.
        omega0: The cutoff frequency.
        N: The filter order (must be even).
    Returns:
        The frequency response H_N(omega).
    """
    if N % 2 != 0:
        raise ValueError("N must be even.")
    n = np.arange(-N / 2, N / 2 + 1)
    H_N = np.exp(-2j * np.pi * omega * N / 2) * np.sum(
        2 * omega0 * np.sinc(2 * omega0 * n) * np.exp(-2j * np.pi * n * omega))
    return H_N

# Example usage:
omega = 0.1
omega0 = 1/8
N = 16
H_val = frequency_response_hn(omega, omega0, N)
print(f"Frequency response H_N: {H_val}")

In other words, the magnitude response |H<sub>N</sub>| is the (N/2)-truncated Fourier series of the frequency response of the ideal lowpass filter. The phase response is given by Φ<sub>h<sub>N</sub></sub>(ω) = −ωN/2, thus resulting in a group delay τ<sub>h<sub>N</sub></sub> of constant value N/2. This kind of filter design has the major disadvantage that it lacks precise control of the filter characteristics as indicated in Fig. 1.15. In particular, to obtain a small transition width, one generally needs a large number of nonzero filter coefficients. Also, one generally cannot guarantee the size of the ripples. As indicated in Fig. 1.16, there is a significant oscillatory overshoot of H<sub>N</sub> at ω = ω<sub>0</sub> independent of the value N. This phenomenon, which is also known as the Gibbs phenomenon, results from the nonuniform mean-square convergence of the Fourier series (1.30) and manifests itself in the design of FIR filters.
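The Gibbs overshoot is easy to reproduce numerically. The following sketch — our own illustration, not taken from the text — evaluates |H<sub>N</sub>| for ω<sub>0</sub> = 1/8 at two different orders and shows that the peak near the cutoff does not shrink as N grows:

```python
import numpy as np
from scipy.signal import freqz

def truncated_lowpass(omega0, N):
    """Coefficients h_N(n) = 2*omega0*sinc(2*omega0*(n - N/2)), 0 <= n <= N."""
    n = np.arange(N + 1)
    return 2 * omega0 * np.sinc(2 * omega0 * (n - N / 2))

omega0 = 1 / 8
overshoots = {}
for N in (32, 256):
    w, H = freqz(truncated_lowpass(omega0, N), worN=16384)
    overshoots[N] = np.max(np.abs(H))  # peak of |H_N| near the cutoff
    print(f"N = {N:4d}: max |H_N| = {overshoots[N]:.4f}")
# The peak stays near 1.09 (about 9% of the jump) for every order N.
```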


Other filter design methods have been suggested that provide total control of the filter specifications in terms of ω<sub>p</sub>, ω<sub>s</sub>, δ<sub>1</sub>, and δ<sub>2</sub> as indicated in Fig. 1.15. In particular, certain classes of IIR filters, including Chebyshev and elliptic filters, are suitable to cope with very strict filter specifications such as sharp transitions between passband and stopband while using a relatively small number of filter parameters. As a disadvantage, however, these filters have nonlinear phase. Before continuing this discussion, we recall the concept of recursive filtering, which allows for realizing certain IIR filters despite infinitely many nonzero filter coefficients. Here, the main idea is to reuse certain output samples, which are previously produced in the filtering process, as input samples.


Fig. 1.17. The causal IIR filter specified by the feedback filter coefficients a(0) = 1 and a(1) = −0.9 and the feedforward filter coefficient b(0) = 1.



More precisely, a recursive filter computes the output y from the input x via the difference equation

\begin{equation}
y(n) = -\sum_{k=1}^{M} a(k)y(n-k) + \sum_{k=0}^{N} b(k)x(n-k), \label{eq:1.31} \tag{1.31}
\end{equation}


where the a(k) are referred to as feedback filter coefficients, the b(k) as feedforward filter coefficients, and max(N, M) is the order of the filter. Setting a(0) = 1 and taking the Fourier transform of the equation


\begin{equation*}
\sum_{k=0}^{M} a(k)y(n-k) = \sum_{k=0}^{N} b(k)x(n-k),
\end{equation*}


one obtains A(ω)Y(ω) = B(ω)X(ω). In other words, the frequency response H, if existent, is given by H(ω) = B(ω)/A(ω). For example, the filter specified by a(0) = 1, a(1) = −0.9, and b(0) = 1 is a causal IIR filter of order M = 1 whose impulse response h is given by h(n) = 0.9<sup>n</sup> for n ≥ 0, see Fig. 1.17.



Here's the Python code for equation (1.31):


import numpy as np

def iir_filter(x, a, b):
    """Applies an IIR filter to a signal via the recursion (1.31).
    Args:
        x: The input signal (NumPy array).
        a: The feedback filter coefficients (NumPy array), a[0] must be 1.
        b: The feedforward filter coefficients (NumPy array).
    Returns:
        The filtered signal (NumPy array).
    """
    N = len(b) - 1
    M = len(a) - 1
    y = np.zeros_like(x, dtype=float)  # initialize as float to avoid integer arithmetic
    for n in range(len(x)):
        y_sum = 0.0
        for k in range(1, M + 1):
            if n - k >= 0:
                y_sum += a[k] * y[n - k]
        x_sum = 0.0
        for k in range(N + 1):
            if n - k >= 0:
                x_sum += b[k] * x[n - k]
        y[n] = -y_sum + x_sum
    return y

# Example usage: the filter of Fig. 1.17 with impulse response h(n) = 0.9^n
x = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # Impulse signal
a = np.array([1, -0.9])
b = np.array([1])
y = iir_filter(x, a, b)
print(f"Filtered signal: {y}")

As a second example, we consider elliptic filters, which will be used in the applications described in the subsequent articles. Elliptic filters are the most effective filters in the sense that they meet given magnitude response specifications with the lowest order of any filter type. This allows for designing filters with a small amount of ripples in the passband and stopband as well as with a small transition width, see Fig. 1.18 for examples. However, elliptic filters reveal significant phase distortions, particularly near the band edges.
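For concreteness, a filter matching the specifications of Fig. 1.18 (top) can be reproduced with standard design routines. The sketch below uses SciPy's `ellip` function — a library choice of ours, not prescribed by the text — with ω<sub>p</sub> = 0.1, δ<sub>1</sub> = 0.2, and δ<sub>2</sub> = 0.1, converted to the decibel quantities that the routine expects:

```python
import numpy as np
from scipy.signal import ellip, freqz

# Specifications from Fig. 1.18 (top): passband edge 0.1 cycles/sample,
# passband ripple delta_1 = 0.2, stopband ripple delta_2 = 0.1 (-20 dB).
rp = -20 * np.log10(1 - 0.2)   # passband ripple in dB (about 1.94 dB)
rs = -20 * np.log10(0.1)       # stopband attenuation in dB (20 dB)
Wn = 0.1 / 0.5                 # edge frequency normalized to Nyquist = 0.5

b, a = ellip(4, rp, rs, Wn, btype='low')  # elliptic lowpass of order 4

# Inspect the magnitude response in the stopband (beyond 0.15 cycles/sample).
w, H = freqz(b, a, worN=4096)
omega = w / (2 * np.pi)
stop_max = np.max(np.abs(H[omega >= 0.15]))
print(f"max stopband magnitude: {stop_max:.3f}")
```

Even at order 4, the stopband magnitude stays below δ<sub>2</sub> = 0.1 past ω<sub>s</sub> = 0.15; achieving a comparable transition with a linear-phase FIR filter would require far more coefficients.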


To compensate for a nonlinear phase, one can use a strategy known as forward-backward filtering. First, the input signal x is passed through a filter h to yield y = h ∗ x. Time reversing y, one obtains a signal y<sup>r</sup>, y<sup>r</sup>(n) = y(−n), which is again passed through the filter h to yield z = h ∗ y<sup>r</sup>. The time reversal z<sup>r</sup> has precisely zero phase, whereas the magnitude is modified by the square of the filter's magnitude response. This fact can be seen immediately in the spectral domain, using that time reversal of a real signal corresponds to complex conjugation of its Fourier transform, so that Y<sup>r</sup> = Ȳ and Z<sup>r</sup> = Z̄:


\begin{equation}
Z^r = \bar{Z} = \overline{H\,\bar{Y}} = \bar{H}\,Y = \bar{H}\,H\,X = |H|^2X. \label{eq:1.32} \tag{1.32}
\end{equation}


Note that this strategy only works in an offline scenario, where the entire signal is available prior to filtering. We will illustrate this with an example (Fig. 1.3) in the next article.


To summarize, forward-backward filtering (1.32) compensates for nonlinear phase distortions by filtering the signal, time-reversing it, filtering it again, and reversing once more. The result has zero phase at the cost of squaring the magnitude response. Since the entire signal must be available, the technique is restricted to offline processing; in eVTOL applications, it lends itself to post-flight analysis of recorded flight data.
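A minimal sketch of the scheme behind (1.32), written with plain `lfilter` calls (SciPy's `filtfilt` implements a refined version of the same idea with additional edge handling):

```python
import numpy as np
from scipy.signal import lfilter

def forward_backward(b, a, x):
    """Zero-phase filtering: filter, time-reverse, filter, time-reverse."""
    y = lfilter(b, a, x)          # forward pass: y = h * x
    z = lfilter(b, a, y[::-1])    # backward pass on the reversed signal
    return z[::-1]                # reverse again: zero phase, |H|^2 magnitude

# Example: a two-point average applied forward-backward acts like the
# symmetric (zero-phase) filter [0.25, 0.5, 0.25].
x = np.zeros(9)
x[4] = 1.0                        # impulse in the middle of the signal
zr = forward_backward([0.5, 0.5], [1.0], x)
print(zr)  # symmetric around index 4: [..., 0.25, 0.5, 0.25, ...]
```

Because the effective impulse response comes out symmetric about the input impulse, the output is not delayed at all, which is precisely the zero-phase property.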


Fig. 1.18. (Top) Elliptic lowpass filter of order 4 satisfying the filter specifications ω<sub>p</sub> = 0.1 and ω<sub>s</sub> = 0.15 (bandedge frequencies), δ<sub>0</sub> = 0 and δ<sub>1</sub> = 0.2 (passband ripples), as well as δ<sub>2</sub> = 0.1 corresponding to -20 dB (stopband ripples), see Fig. 1.15. (Bottom) Elliptic bandpass filter of order 12 for ω<sub>p</sub> = 0.2 and ω<sub>s</sub> = 0.19 (left-hand side), ω<sub>p</sub> = 0.3 and ω<sub>s</sub> = 0.31 (right-hand side), δ<sub>0</sub> = 0 and δ<sub>1</sub> = 0.05 as well as δ<sub>2</sub> = 0.01 corresponding to -40 dB.



1.3.5 Supplementary Remarks


Most of the concepts discussed in this section can be found in standard textbooks on digital signal processing such as Signals and Systems by Alan V. Oppenheim and Alan S. Willsky and are implemented in signal processing software, see, e.g., NASA's Waveform-based NDE.


When a filter is to be implemented in software, various important issues concerning efficiency and numerical stability have to be considered. For example, in realizing the recursion (1.31) there are many possible strategies each relying on different filter representations such as the direct form, the cascade form, or the lattice structure. Depending upon the strategy, there may be significant differences in the number of arithmetic operations required to perform the filtering. A second issue concerns the memory requirements for storing the filter parameters, the past inputs, past outputs, and any intermediately computed values. Thirdly, one has to be aware of numerical problems that arise from rounding errors when using finite representation of real numbers. For a detailed discussion of these issues, we refer to Numerical Analysis by Richard Burden and J. Douglas Faires.
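As one concrete instance of these implementation choices, SciPy exposes the cascade form through second-order sections. The sketch below — our own example, not from the text — realizes the same elliptic filter in both the direct form and the cascade form; for moderate orders the two agree closely, while for high orders the cascade form is generally the numerically safer choice:

```python
import numpy as np
from scipy.signal import ellip, lfilter, sosfilt

# The same order-8 elliptic lowpass in direct ('ba') and cascade ('sos') form.
b, a = ellip(8, 1, 40, 0.2, btype='low', output='ba')
sos = ellip(8, 1, 40, 0.2, btype='low', output='sos')

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

y_direct = lfilter(b, a, x)   # direct-form recursion as in (1.31)
y_cascade = sosfilt(sos, x)   # cascade of four second-order sections

print(np.max(np.abs(y_direct - y_cascade)))  # small for this moderate order
```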


As discussed above, a linear time-invariant filter h can be regarded as a spectral shaping function that modifies the input signal spectrum X(ω) according to its frequency response H(ω), see Understanding Digital Signal Processing by Richard Lyons. Therefore, the desired filter characteristics are frequently specified in the spectral domain. Given a set of possibly contradicting or even physically unrealizable requirements, the filter design process can be regarded as an optimization problem where each requirement contributes a particular term to an error function which has to be minimized. Such optimization tasks constitute difficult mathematical problems and may involve computationally expensive algorithms. Many of the techniques for designing digital IIR filters are based on the conversion of classical analog filters described by some linear constant-coefficient differential equation (MIT's 18.03 Differential Equations course). In particular, the design of elliptic filters is mathematically involved, being based on elliptic rational functions, see Advanced Engineering Mathematics by Erwin Kreyszig. However, once the filter design methods are available in signal processing software, they are easily manageable simply by specifying the desired filter characteristics.



In this section, we have mainly discussed the case of lowpass filters. Actually, there is a straightforward method to construct bandpass filters from lowpass filters based on so-called modulation, see Digital Signal Processing by Steven W. Smith. For a given filter h ∈ ℓ<sup>1</sup>(ℤ) and λ ∈ ℝ, the λ-modulated filter h<sub>λ</sub> is defined by h<sub>λ</sub>(n) := e<sup>−2πiλn</sup>h(n), n ∈ ℤ. Then, it is easy to see that H<sub>λ</sub>(ω) = H(ω + λ). A modulation in the time domain thus corresponds to a shift in the frequency domain. In case h has real coefficients, one can obtain a modulated filter with real coefficients by considering the real part g<sub>λ</sub> := Re(h<sub>λ</sub>). The filter g<sub>λ</sub> is also referred to as cosine modulation of h and yields the frequency response G<sub>λ</sub>(ω) = (1/2)(H(ω + λ) + H(ω − λ)). We refer to Fig. 1.19 for an illustration of the modulation effects.



Here's the Python code for the modulation:


import numpy as np

def modulate_filter(h, lam):
    """Modulates a filter in the time domain.
    Args:
        h: The filter coefficients (NumPy array).
        lam: The modulation frequency.
    Returns:
        The modulated filter coefficients (NumPy array).
    """
    n = np.arange(len(h))
    h_lam = h * np.exp(-2j * np.pi * lam * n)
    return h_lam

def cosine_modulate_filter(h, lam):
    """Cosine modulates a filter in the time domain.
    Args:
        h: The filter coefficients (NumPy array).
        lam: The modulation frequency.
    Returns:
        The cosine modulated filter coefficients (NumPy array).
    """
    h_lam = modulate_filter(h, lam)
    g_lam = np.real(h_lam)
    return g_lam

# Example:
h = np.array([1, 2, 1])
lam = 0.25
h_mod = modulate_filter(h, lam)
g_mod = cosine_modulate_filter(h, lam)
print(f"Modulated filter h_lam: {h_mod}")
print(f"Cosine modulated filter g_lam: {g_mod}")

Finally, we remark that for certain applications, it may be important to recover the original signal from certain filter outputs. For example, the process of inverting the effect of a convolution filter is known as deconvolution. If an invertible convolution filter and its inverse are causal and stable, then they are also referred to as filters with minimum phase. For details we refer to the standard literature such as [ , ]. Dealing with vibration and turbulence analysis rather than synthesis applications, the invertibility of filters or filter banks will not play a crucial role in the following articles.





In this era of electric Vertical Takeoff and Landing (eVTOL) vehicles, where we seek to blend the aerodynamics of birds with the precision of microprocessors, my father’s work remains astonishingly relevant. Through his meticulous studies in thermodynamics and fluid mechanics, we are carving new perspectives in Agentic Retrieval for Advanced Air Mobility (AAM) Guidance, Navigation, and Flight Controls. By building on his foundation, we aspire to honor his legacy, pushing the boundaries of aviation with the same curiosity and resolve that defined his life.
I hope you see more than just a technical treatise on flight data. I hope you see a story—a continuum of knowledge passed from one curious mind to another. My father often said, "Machines may not feel, but they tell us a story if we listen closely enough." This article, and the ones to follow, are our way of listening—and perhaps crafting new chapters in the endless tale of human ingenuity.
Stay tuned for more musings!




