Data Analysis
Start by importing the analysis submodule of the Icarus Pressure Jump for NMR package:
from icarus_nmr import analysis
The next step is to create an instance of the Dataset class, point it at a data folder, and initialize it:
dataset = analysis.Dataset()
dataset.folder = 'path/to/the/folder/containing/log/data'
dataset.description = 'you can write your description here.'
dataset.init()
All data is stored in one array (dataset.log_data), and the header for that array is stored in dataset.log_header:
dataset.log_header #to see the header
dataset.log_data[:,4] #to access column 4 (zero-based) of the log data array
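If you prefer to select columns by name rather than by position, you can build a small name-to-index map from the header. This is a minimal sketch, assuming dataset.log_header is either a list of column names or a single comma-separated string; adjust the parsing if the header is stored differently.

# build a name -> column index map from the header (hypothetical helper,
# assuming the header is a list of names or a comma-separated string)
header = dataset.log_header
if isinstance(header, str):
    header = [name.strip() for name in header.split(',')]

column = {name: i for i, name in enumerate(header)}

time = dataset.log_data[:, column['time']]                  # Unix time stamps
pump_strokes = dataset.log_data[:, column['pump_stroke']]   # pump strokes since last reset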
The data in the folder can be visualized. There are several built-in functions that can be used to visualize data quickly:
dataset.plot_history(type = 'all')
dataset.plot_trace(type = 'pre', period = 1)
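Because plot_trace returns a matplotlib figure object when show = False (see the plot_trace documentation below), the figure can also be saved to disk instead of being displayed. A small sketch, assuming the returned object behaves like a standard matplotlib Figure:

# grab the figure for the first pressurization trace without showing it,
# then write it to a PNG file (file name chosen here only as an example)
fig = dataset.plot_trace(type = 'pre', period = 1, show = False)
if fig is not None:  # plot_trace returns None if the trace file doesn't exist
    fig.savefig('pre_trace_period_1.png', dpi = 150)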
The log file has 37 columns:

* time - Unix time
* global_pointer - global pointer in the circular buffer where the event occurred
* period_idx - index of the period
* event_code - the event integer number (see XXX for details)
* pPre_0 - pressure (TODO)
* pDepre_0 - (TODO)
* pPre_after_0 - (TODO)
* pDiff_0 - (TODO)
* tSwitchDepressure_0 - (TODO)
* tSwitchDepressureEst_0 - (TODO)
* tSwitchPressure_0 - (TODO)
* tSwitchPressureEst_0 - (TODO)
* gradientPressure_0 - (TODO)
* gradientDepressure_0 - (TODO)
* gradientPressureEst_0 - (TODO)
* gradientDepressureEst_0 - (TODO)
* riseTime_0 - (TODO)
* fallTime_0 - (TODO)
* pPre_1 - (TODO)
* pDepre_1 - (TODO)
* pPre_after_1 - (TODO)
* pDiff_1 - (TODO)
* tSwitchDepressure_1 - (TODO)
* tSwitchPressure_1 - (TODO)
* gradientPressure_1 - (TODO)
* gradientDepressure_1 - (TODO)
* fallTime_1 - (TODO)
* riseTime_1 - (TODO)
* period - (TODO)
* delay - (TODO)
* pressure_pulse_width - (TODO)
* depressure_pulse_width - (TODO)
* pump_stroke - number of pump strokes since the last reset
* depressure_valve_counter - (TODO)
* pressure_valve_counter - (TODO)
* leak_value - (TODO)
* meanbit3 - mean value while bit 3 is high
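As a sketch of working with these columns directly, the snippet below plots the pPre_0 reading against time using the positions from the list above (column 0 is time, column 4 is pPre_0); the exact meaning of the columns still marked (TODO) should be confirmed against your own log files.

import matplotlib.pyplot as plt

time = dataset.log_data[:, 0]   # Unix time (column 0)
p_pre = dataset.log_data[:, 4]  # pPre_0 (column 4)

plt.plot(time - time[0], p_pre, '.')
plt.xlabel('time since start of experiment (s)')
plt.ylabel('pPre_0')
plt.show()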
The Dataset Class
- class icarus_nmr.analysis.Dataset(folder=None)
- combine_log_entries(raw_data)
Combines all entries associated with one period into one entry, converting the raw log file data to a collapsed 2D numpy array where every entry corresponds to one period. The time stamp of the period is taken from the period event, which is defined elsewhere.
- Parameters
- raw_data: numpy array
raw data from original log file
- Returns
- data: numpy array
compressed data
Examples
>>> from numpy import genfromtxt
>>> folder = '/2019-05-31-13-13-52/'
>>> raw_data = genfromtxt(folder + 'experiment.log', delimiter = ',', skip_header = 2)
>>> data = dataset.combine_log_entries(raw_data)
>>> data.shape
- dump_to_pickle_file(obj=None)
Pickles the data and writes it to the drive. The default file name is experiment.pickle (analogous to the original experiment.log).
- Parameters
- obj: numpy array
data to pickle and write to the drive
Examples
>>> dataset = Dataset()
>>> dataset.dump_to_pickle_file(obj = data)
- get_trace(period=0, type='')
Returns the trace numpy array from /buffer_files/.
- Parameters
- period: integer
period index number
- type: string
type of buffer file (pre, depre, pump, cooling, etc.)
- Returns
- data: numpy array
trace data read from the buffer file
Examples
>>> data = dataset.get_trace(period = 2, type = 'pump')
>>> data.shape
- log_load_pickle_file(folder)
Checks whether the pickle file exists and loads it.
- Parameters
- folder: string
folder name
- Returns
- data: numpy array
collapsed log data (one entry per period)
- rawdata: numpy array
raw log file data
Examples
>>> folder = '/2019-05-31-13-13-52/'
>>> data = dataset.log_load_pickle_file(folder = folder)
>>> data.shape
- log_read_file(folder)
Converts the raw log file data to a collapsed 2D numpy array where every entry corresponds to one period. The time stamp of the period is taken from the period event, which is defined elsewhere.
Looks for the experiment.log file in the specified folder, reads it, and returns the data as numpy arrays. The typical folder name is /YEAR-MM-DD-hh-mm-ss, where MM - month, DD - day, hh - hours (24-hour clock), mm - minutes, ss - seconds.
- Parameters
- folder: string
folder name
- Returns
- raw_data: numpy array
raw data from the original log file
- data: numpy array
collapsed data where every entry corresponds to one period
Examples
>>> folder = '/2019-05-31-13-13-52/'
>>> raw_data, data = dataset.log_read_file(folder = folder)
>>> raw_data.shape
(980, 37)
>>> data.shape
(403, 37)
- log_read_header(folder)
Looks for the experiment.log file in the specified folder, reads it, and returns the header. The typical folder name is /YEAR-MM-DD-hh-mm-ss, where MM - month, DD - day, hh - hours (24-hour clock), mm - minutes, ss - seconds.
- Parameters
- folder: string
folder name
- Returns
- header: string
header of the log file
Examples
>>> dataset = Dataset()
>>> folder = '/2019-05-31-13-13-52/'
>>> header = dataset.log_read_header(folder = folder)
>>> header
- plot_log(type=None)
Returns a matplotlib figure object of the log data of the selected type. Returns None if the log file doesn't exist.
- Parameters
- type: string
type of buffer file (pre, depre, pump, cooling, etc.)
- period: integer
period index number
- show: boolean
optional plot show flag
- Returns
- object: matplotlib object
matplotlib object
Examples
>>> fig = dataset.plot_log(type = 'all')
- plot_trace(type=None, period=None, show=False)
Returns a matplotlib figure object of a trace of the selected type and period. Returns None if the trace file doesn't exist.
- Parameters
- type: string
type of buffer file (pre, depre, pump, cooling, etc.)
- period: integer
period index number
- show: boolean
optional plot show flag
- Returns
- object: matplotlib object
matplotlib object
Examples
>>> fig = dataset.plot_trace(type = 'pump', period = 2)