FFT and Inverse FFT Operations. The most computationally intensive operation in OFDM modulation is the IFFT; likewise, the core of OFDM demodulation is the FFT. High FFT throughput is essential in broadband systems, especially when the FFT hardware is shared between multiple data paths in modern scalable wireless systems.
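The modulator/demodulator pair reduces to an IFFT at the transmitter and an FFT at the receiver. A minimal NumPy round trip (with an illustrative FFT size of 64, matching the script below) shows the idea:

```python
import numpy as np

N = 64  # FFT size / number of subcarriers (illustrative choice)
# one random QPSK symbol per subcarrier
symbols = (2 * np.random.randint(0, 2, N) - 1) + 1j * (2 * np.random.randint(0, 2, N) - 1)

tx_time = np.fft.ifft(symbols)  # OFDM modulation: IFFT to the time domain
rx_freq = np.fft.fft(tx_time)   # OFDM demodulation: FFT back to the frequency domain

# over an ideal (noiseless, flat) channel the symbols are recovered exactly
print(np.allclose(rx_freq, symbols))
```

Everything a real system adds (cyclic prefix, pilots, channel, equalization) wraps around this FFT/IFFT core.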
from __future__ import division
import numpy as np
import scipy.interpolate
import tensorflow as tf
import math
import os

K = 64                          # number of OFDM subcarriers
CP = K // 4                     # cyclic prefix length
P = 64                          # number of pilot carriers per OFDM block
allCarriers = np.arange(K)      # indices of all subcarriers ([0, 1, ..., K-1])
pilotCarriers = allCarriers[::K // P]  # pilots on every (K/P)-th carrier
dataCarriers = np.delete(allCarriers, pilotCarriers)

mu = 2                          # bits per symbol (QPSK)
payloadBits_per_OFDM = K * mu   # number of payload bits per OFDM symbol
SNRdb = 20                      # signal-to-noise ratio in dB at the receiver
Clipping_Flag = False

mapping_table = {
    (0, 0): -1 - 1j,
    (0, 1): -1 + 1j,
    (1, 0):  1 - 1j,
    (1, 1):  1 + 1j,
}
demapping_table = {v: k for k, v in mapping_table.items()}

def Clipping(x, CL):
    # clip samples whose magnitude exceeds CL times the RMS amplitude
    sigma = np.sqrt(np.mean(np.square(np.abs(x))))
    CL = CL * sigma
    x_clipped = x.copy()   # copy so the caller's array is not modified in place
    clipped_idx = abs(x_clipped) > CL
    x_clipped[clipped_idx] = x_clipped[clipped_idx] * CL / abs(x_clipped[clipped_idx])
    return x_clipped

def PAPR(x):
    # peak-to-average power ratio in dB
    Power = np.abs(x) ** 2
    PeakP = np.max(Power)
    AvgP = np.mean(Power)
    PAPR_dB = 10 * np.log10(PeakP / AvgP)
    return PAPR_dB

def Modulation(bits):
    # QPSK mapping: each pair of bits becomes one complex symbol
    bit_r = bits.reshape((int(len(bits) / mu), mu))
    return (2 * bit_r[:, 0] - 1) + 1j * (2 * bit_r[:, 1] - 1)

def OFDM_symbol(Data, pilot_flag=True):
    # pilot_flag kept for compatibility with callers that omit it
    symbol = np.zeros(K, dtype=complex)   # the overall K subcarriers
    symbol[pilotCarriers] = pilotValue    # allocate the pilot subcarriers
    symbol[dataCarriers] = Data           # allocate the data subcarriers
    return symbol

def IDFT(OFDM_data):
    return np.fft.ifft(OFDM_data)

def addCP(OFDM_time):
    cp = OFDM_time[-CP:]               # take the last CP samples ...
    return np.hstack([cp, OFDM_time])  # ... and prepend them

def channel(signal, channelResponse, SNRdb):
    # multipath convolution followed by complex AWGN at the given SNR
    convolved = np.convolve(signal, channelResponse)
    signal_power = np.mean(abs(convolved) ** 2)
    sigma2 = signal_power * 10 ** (-SNRdb / 10)
    noise = np.sqrt(sigma2 / 2) * (np.random.randn(*convolved.shape)
                                   + 1j * np.random.randn(*convolved.shape))
    return convolved + noise

def removeCP(signal):
    return signal[CP:(CP + K)]

def DFT(OFDM_RX):
    return np.fft.fft(OFDM_RX)

def equalize(OFDM_demod, Hest):
    return OFDM_demod / Hest

def get_payload(equalized):
    return equalized[dataCarriers]

def PS(bits):
    return bits.reshape((-1,))

def ofdm_simulate(codeword, channelResponse, SNRdb):
    # first OFDM symbol: all pilots, used for channel estimation
    OFDM_data = np.zeros(K, dtype=complex)
    OFDM_data[allCarriers] = pilotValue
    OFDM_time = IDFT(OFDM_data)
    OFDM_withCP = addCP(OFDM_time)
    OFDM_TX = OFDM_withCP
    if Clipping_Flag:
        OFDM_TX = Clipping(OFDM_TX, CR)  # add clipping
    OFDM_RX = channel(OFDM_TX, channelResponse, SNRdb)
    OFDM_RX_noCP = removeCP(OFDM_RX)
    # ----- second OFDM symbol: the data codeword -----
    symbol = np.zeros(K, dtype=complex)
    codeword_qam = Modulation(codeword)
    symbol[np.arange(K)] = codeword_qam
    OFDM_data_codeword = symbol
    OFDM_time_codeword = np.fft.ifft(OFDM_data_codeword)
    OFDM_withCP_codeword = addCP(OFDM_time_codeword)
    if Clipping_Flag:
        OFDM_withCP_codeword = Clipping(OFDM_withCP_codeword, CR)  # add clipping
    OFDM_RX_codeword = channel(OFDM_withCP_codeword, channelResponse, SNRdb)
    OFDM_RX_noCP_codeword = removeCP(OFDM_RX_codeword)
    # network input: real and imaginary parts of both received symbols
    return np.concatenate((np.concatenate((np.real(OFDM_RX_noCP), np.imag(OFDM_RX_noCP))),
                           np.concatenate((np.real(OFDM_RX_noCP_codeword),
                                           np.imag(OFDM_RX_noCP_codeword))))), abs(channelResponse)

def ofdm_simulate_single_without_CP(codeword, channelResponse):
    codeword_qam = Modulation(codeword)
    OFDM_data_codeword = OFDM_symbol(codeword_qam)
    OFDM_time_codeword = np.fft.ifft(OFDM_data_codeword)
    # using a new OFDM symbol for the prefix
    bits = np.random.binomial(n=1, p=0.5, size=(payloadBits_per_OFDM,))
    codeword_noise = Modulation(codeword)
    OFDM_data_noise = OFDM_symbol(codeword_noise)
    OFDM_time_noise = np.fft.ifft(OFDM_data_noise)
    cp = OFDM_time_noise[-CP:]  # take the last CP samples ...
    OFDM_withCP_codeword = np.hstack([cp, OFDM_time_codeword])
    OFDM_RX_codeword = channel(OFDM_withCP_codeword, channelResponse, SNRdb)
    OFDM_RX_noCP_codeword = removeCP(OFDM_RX_codeword)
    return np.concatenate((np.real(OFDM_RX_noCP_codeword),
                           np.imag(OFDM_RX_noCP_codeword))), abs(channelResponse)

# pilot bits: load from file if present, otherwise generate and save
Pilot_file_name = 'Pilot_' + str(P)
if os.path.isfile(Pilot_file_name):
    print('Load Training Pilots txt')
    bits = np.loadtxt(Pilot_file_name, delimiter=',')
else:
    bits = np.random.binomial(n=1, p=0.5, size=(K * mu,))
    np.savetxt(Pilot_file_name, bits, delimiter=',')
pilotValue = Modulation(bits)
CR = 1  # clipping ratio

### Deep Learning Training
def training():
    # Training parameters
    training_epochs = 20000
    batch_size = 256
    display_step = 5
    test_step = 1000
    examples_to_show = 10
    # Network parameters
    n_hidden_1 = 500
    n_hidden_2 = 250   # 1st hidden layer size
    n_hidden_3 = 120   # 2nd hidden layer size
    n_input = 256      # 2 symbols x K subcarriers x (real, imag) = 4K samples
    n_output = 16      # bits predicted per network
    # tf graph input
    X = tf.placeholder('float', [None, n_input])
    Y = tf.placeholder('float', [None, n_output])

    def encoder(x):
        weights = {
            'encoder_h1': tf.Variable(tf.truncated_normal([n_input, n_hidden_1], stddev=0.1)),
            'encoder_h2': tf.Variable(tf.truncated_normal([n_hidden_1, n_hidden_2], stddev=0.1)),
            'encoder_h3': tf.Variable(tf.truncated_normal([n_hidden_2, n_hidden_3], stddev=0.1)),
            'encoder_h4': tf.Variable(tf.truncated_normal([n_hidden_3, n_output], stddev=0.1)),
        }
        biases = {
            'encoder_b1': tf.Variable(tf.truncated_normal([n_hidden_1], stddev=0.1)),
            'encoder_b2': tf.Variable(tf.truncated_normal([n_hidden_2], stddev=0.1)),
            'encoder_b3': tf.Variable(tf.truncated_normal([n_hidden_3], stddev=0.1)),
            'encoder_b4': tf.Variable(tf.truncated_normal([n_output], stddev=0.1)),
        }
        # three ReLU hidden layers, sigmoid output (bit estimates in [0, 1])
        layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights['encoder_h1']), biases['encoder_b1']))
        layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, weights['encoder_h2']), biases['encoder_b2']))
        layer_3 = tf.nn.relu(tf.add(tf.matmul(layer_2, weights['encoder_h3']), biases['encoder_b3']))
        layer_4 = tf.nn.sigmoid(tf.add(tf.matmul(layer_3, weights['encoder_h4']), biases['encoder_b4']))
        return layer_4

    y_pred = encoder(X)
    # Targets (labels) are the transmitted bits.
    y_true = Y
    # Define loss and optimizer; minimize the squared error
    cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
    learning_rate = tf.placeholder(tf.float32, shape=[])
    optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(cost)
    # Initializing the variables
    init = tf.global_variables_initializer()
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True

    # The channel (H) information set
    H_folder = './H_dataset/'
    train_idx_low = 1
    train_idx_high = 301
    test_idx_low = 301
    test_idx_high = 401
    # Load the channel responses into memory; each line stores the real
    # parts of the taps followed by the imaginary parts
    channel_response_set_train = []
    for train_idx in range(train_idx_low, train_idx_high):
        H_file = H_folder + str(train_idx) + '.txt'
        with open(H_file) as f:
            for line in f:
                numbers_float = [float(x) for x in line.split()]
                half = int(len(numbers_float) / 2)
                h_response = (np.asarray(numbers_float[0:half])
                              + 1j * np.asarray(numbers_float[half:]))
                channel_response_set_train.append(h_response)
    channel_response_set_test = []
    for test_idx in range(test_idx_low, test_idx_high):
        H_file = H_folder + str(test_idx) + '.txt'
        with open(H_file) as f:
            for line in f:
                numbers_float = [float(x) for x in line.split()]
                half = int(len(numbers_float) / 2)
                h_response = (np.asarray(numbers_float[0:half])
                              + 1j * np.asarray(numbers_float[half:]))
                channel_response_set_test.append(h_response)
    print('length of training channel response', len(channel_response_set_train),
          'length of testing channel response', len(channel_response_set_test))

    with tf.Session(config=config) as sess:
        sess.run(init)
        learning_rate_current = 0.001
        for epoch in range(training_epochs):
            print(epoch)
            if epoch > 0 and epoch % 20000 == 0:
                # decay the learning rate every 20000 epochs
                learning_rate_current = learning_rate_current / 5
            avg_cost = 0.
            total_batch = 50
            for index_m in range(total_batch):
                input_samples = []
                input_labels = []
                for index_k in range(0, 1000):
                    bits = np.random.binomial(n=1, p=0.5, size=(payloadBits_per_OFDM,))
                    channel_response = channel_response_set_train[
                        np.random.randint(0, len(channel_response_set_train))]
                    signal_output, para = ofdm_simulate(bits, channel_response, SNRdb)
                    input_labels.append(bits[16:32])
                    input_samples.append(signal_output)
                batch_x = np.asarray(input_samples)
                batch_y = np.asarray(input_labels)
                _, c = sess.run([optimizer, cost],
                                feed_dict={X: batch_x,
                                           Y: batch_y,
                                           learning_rate: learning_rate_current})
                avg_cost += c / total_batch
            if epoch % display_step == 0:
                print('Epoch:', '%04d' % (epoch + 1), 'cost=', '{:.9f}'.format(avg_cost))
                input_samples_test = []
                input_labels_test = []
                test_number = 1000
                # run a larger test set every test_step epochs
                if epoch % test_step == 0:
                    print('Big Test Set')
                    test_number = 10000
                for i in range(0, test_number):
                    bits = np.random.binomial(n=1, p=0.5, size=(payloadBits_per_OFDM,))
                    channel_response = channel_response_set_test[
                        np.random.randint(0, len(channel_response_set_test))]
                    signal_output, para = ofdm_simulate(bits, channel_response, SNRdb)
                    input_labels_test.append(bits[16:32])
                    input_samples_test.append(signal_output)
                batch_x = np.asarray(input_samples_test)
                batch_y = np.asarray(input_labels_test)
                encode_decode = sess.run(y_pred, feed_dict={X: batch_x})
                mean_error = tf.reduce_mean(abs(y_pred - batch_y))
                mean_error_rate = 1 - tf.reduce_mean(tf.reduce_mean(
                    tf.to_float(tf.equal(tf.sign(y_pred - 0.5),
                                         tf.cast(tf.sign(batch_y - 0.5), tf.float32))), 1))
                print('OFDM Detection QAM output number is', n_output, 'SNR = ', SNRdb,
                      'Num Pilot', P, 'prediction and the mean error on test set are:',
                      mean_error.eval({X: batch_x}), mean_error_rate.eval({X: batch_x}))
                batch_x = np.asarray(input_samples)
                batch_y = np.asarray(input_labels)
                encode_decode = sess.run(y_pred, feed_dict={X: batch_x})
                mean_error = tf.reduce_mean(abs(y_pred - batch_y))
                mean_error_rate = 1 - tf.reduce_mean(tf.reduce_mean(
                    tf.to_float(tf.equal(tf.sign(y_pred - 0.5),
                                         tf.cast(tf.sign(batch_y - 0.5), tf.float32))), 1))
                print('prediction and the mean error on train set are:',
                      mean_error.eval({X: batch_x}), mean_error_rate.eval({X: batch_x}))
        print('optimization finished')

training()
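For comparison with the learned detector, a conventional zero-forcing receiver over the same kind of link can be sketched as follows. This is a self-contained illustration, not part of the original script: the 3-tap `channelResponse` is an assumed example, and perfect channel knowledge is used in place of pilot-based estimation.

```python
import numpy as np

# same parameters as the script above
K, CP, mu, SNRdb = 64, 16, 2, 20

def modulate(bits):
    # QPSK: each bit pair -> one complex symbol
    b = bits.reshape(-1, mu)
    return (2 * b[:, 0] - 1) + 1j * (2 * b[:, 1] - 1)

np.random.seed(0)
bits = np.random.binomial(n=1, p=0.5, size=(K * mu,))
channelResponse = np.array([1.0, 0.0, 0.3 + 0.3j])  # assumed 3-tap channel

# transmit: IFFT + cyclic prefix
tx_time = np.fft.ifft(modulate(bits))
tx = np.hstack([tx_time[-CP:], tx_time])

# channel: multipath convolution + complex AWGN
conv = np.convolve(tx, channelResponse)
sigma2 = np.mean(abs(conv) ** 2) * 10 ** (-SNRdb / 10)
rx = conv + np.sqrt(sigma2 / 2) * (np.random.randn(*conv.shape)
                                   + 1j * np.random.randn(*conv.shape))

# receive: drop CP, FFT, zero-forcing equalization with perfect CSI
Hest = np.fft.fft(channelResponse, K)
equalized = np.fft.fft(rx[CP:CP + K]) / Hest

# hard QPSK demapping back to bits
rx_bits = np.vstack([((s.real > 0) * 1, (s.imag > 0) * 1)
                     for s in equalized]).reshape(-1)
print('BER:', np.mean(rx_bits != bits))
```

At 20 dB SNR with perfect channel knowledge this baseline decodes essentially error-free; the interest of the learned receiver is its behavior when the channel must be inferred from limited pilots.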
I read an LTE PHY profile and do not understand the relationship between FFT size and bandwidth. For example, the LTE downlink channel bandwidths 1.25 MHz and 5 MHz use 128-point and 512-point FFTs, respectively. Another example is 802.11a: bandwidth 20 MHz, FFT size 52.
I am designing an OFDM system at a carrier frequency of 910 MHz with an FFT size of, say, 64. How much coherence bandwidth does the system need? What is the formula? What other factors should be taken into account?
Added 18 Aug (I don't have enough reputation to comment on Jason R's reply):
When I test a simple software-defined-radio OFDM system with FFT size = 256, 125 data carriers per symbol, 25 pilots, bandwidth = 5 MHz, and carrier frequency = 910 MHz, the channel estimation works fine. If I change the bandwidth to 1 MHz, the channel estimation goes wrong. In this case I reduce the sampling frequency by a factor of 5, so the carrier spacing is reduced by a factor of 5 too.
I think the carrier spacing must stay above some limit so that intersymbol interference (or inter-carrier interference?) is small enough not to cause channel estimation failure. How do I find that carrier-spacing limit?
Or perhaps the problem is that the delay spread no longer satisfies $\tau \ll T$, where $T$ is the symbol interval. How do I calculate the limit on $\tau$?
Thanks
1 Answer
The channel bandwidth and FFT size alone don't provide enough information to describe the entire structure of the OFDM signal. Recall the following relationship:
$$\Delta f = \frac{f_s}{N}$$
where $\Delta f$ is the subcarrier spacing, $f_s$ is the sample rate used at the modulator input, and $N$ is the FFT size.
In your LTE examples, the two channel bandwidths must use the same subcarrier spacing: it takes four times as much bandwidth (5 MHz versus 1.25 MHz) to carry four times as many subcarriers (512 versus 128).
For 802.11a, however, the FFT size is typically 64; with a sample rate of 20 MHz, this yields a subcarrier spacing of 312.5 kHz. Of these 64 potential subcarriers, 12 of them are unused (sometimes referred to as 'virtual carriers'). Of the remaining 52 subcarriers, 4 contain pilot tones, so only 48 subcarriers carry information at any given time.
For your notional OFDM system, you can choose the amount of bandwidth required by your system, as required based on your throughput requirements. Choose the sample rate that you will input to the modulator, then choose a convenient FFT size. This defines your number of subcarriers and therefore the subcarrier frequency spacing.
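To make the formula concrete, here is the spacing computed for standard LTE sample rates (1.92 MHz with a 128-point FFT for the narrowest channel, 7.68 MHz with a 512-point FFT for the 5 MHz channel; these rates are from the LTE specification, not the question) and for 802.11a:

```python
def subcarrier_spacing(fs, N):
    """Delta_f = fs / N: sample rate divided by FFT size."""
    return fs / N

# LTE: both configurations yield the same 15 kHz subcarrier spacing
print(subcarrier_spacing(1.92e6, 128))  # 15000.0
print(subcarrier_spacing(7.68e6, 512))  # 15000.0

# 802.11a: 20 MHz sample rate, 64-point FFT -> 312.5 kHz spacing
print(subcarrier_spacing(20e6, 64))     # 312500.0
```

This is why quadrupling the LTE bandwidth quadruples the FFT size: the spacing is held fixed and the subcarrier count scales with the sample rate.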