The StateTimeSeries and StateVector

>>> from gwpy.timeseries import (StateTimeSeries, StateVector)

A large quantity of important data from gravitational-wave detectors can be distilled into simple boolean (True or False) statements describing the state of the instrument at a given time. These statements can be used to identify times during which a particular control system was active, or when the signal in a seismometer was above an alarm threshold, for example. In GWpy, these data are represented by special cases (sub-classes) of the TimeSeries object:

StateTimeSeries Boolean array representing a good/bad state determination of some data.
StateVector Binary array representing good/bad state determinations of some data.

State time-series

The example of a threshold on a signal time-series is at the core of a large amount of the low-level data-quality information used in searches for gravitational waves and in detector characterisation. It is described by the StateTimeSeries object, a specific type of TimeSeries containing only boolean values.

These arrays can be generated from simple arrays of booleans, as follows:

>>> from gwpy.timeseries import StateTimeSeries
>>> state = StateTimeSeries([True, True, False, False, False, True, False],
                            sample_rate=1, epoch=1064534416)
>>> state
<StateTimeSeries([ True,  True, False, False, False,  True, False], dtype=bool
                 name=None
                 unit=Unit(dimensionless)
                 epoch=<Time object: scale='tai' format='gps' value=1064534416.0>
                 channel=None
                 sample_rate=<Quantity 1 Hz>)>

Alternatively, applying a standard mathematical comparison to a regular TimeSeries will return a StateTimeSeries:

>>> from gwpy.timeseries import TimeSeries
>>> seisdata = TimeSeries.fetch('L1:HPI-BS_BLRMS_Z_3_10', 1064534416, 1064538016)
>>> seisdata.unit = 'nm/s'
>>> highseismic = seisdata > 400
>>> highseismic
<StateTimeSeries([False, False, False, ..., False, False, False], dtype=bool
                 name='L1:HPI-BS_BLRMS_Z_3_10 > 400 nm / s'
                 unit=Unit("nm / s")
                 epoch=<Time object: scale='tai' format='gps' value=1064534416.0>
                 channel=Channel("L1:HPI-BS_BLRMS_Z_3_10")
                 sample_rate=<Quantity 16.0 Hz>)>

The StateTimeSeries includes a handy StateTimeSeries.to_dqflag() method to convert the boolean array into a DataQualityFlag:

>>> segments = highseismic.to_dqflag(round=True)
>>> segments
<DataQualityFlag(valid=[[1064534416 ... 1064538016)],
                 active=[[1064535295 ... 1064535296)
                         [1064535896 ... 1064535897)
                         [1064536969 ... 1064536970)
                         [1064537086 ... 1064537088)
                         [1064537528 ... 1064537529)],
                 ifo=None,
                 name=None,
                 version=None,
                 comment='L1:HPI-BS_BLRMS_Z_3_10 > 400 nm / s')>

Multi-bit state-vectors

While the StateTimeSeries represents a single True/False statement about the state of a system, the StateVector groups a number of these together, with a binary bitmask mapping each bit of a binary word to a description of one state in a compound system.

For example, in Initial LIGO the state of the full laser interferometer was described by a combination of separate states, including:

  • operator set to go to ‘science mode’
  • EPICS control system record (conlog) OK
  • instrument locked in all resonant cavities
  • no signal injections
  • no unauthorised excitations

Additionally, the higher bits 5-15 were set ‘ON’ at all times, such that the word 0xffff indicated ‘science mode’ operation of the instrument.

This StateVector can be read from the S6 frames as:

>>> from gwpy.timeseries import StateVector
>>> state = StateVector.fetch('H1:IFO-SV_STATE_VECTOR', 968631675, 968632275,
                              ['science', 'conlog', 'up', 'no injection', 'no excitation'])
>>> print(state)
StateVector([65528 65528 65528 ..., 65535 65535 65535],
            name=H1:IFO-SV_STATE_VECTOR,
            unit=,
            epoch=968631675.0,
            channel=H1:IFO-SV_STATE_VECTOR,
            sample_rate=16.0 Hz,
            bitmask=BitMask(0: science
                            1: conlog
                            2: up
                            3: no injection
                            4: no excitation,
                            channel=H1:IFO-SV_STATE_VECTOR,
                            epoch=968631675.0))

As can be seen, the input bitmask (['science', 'conlog', 'up', 'no injection', 'no excitation']) is represented through the BitMask class, recording the bits as a list with some metadata about their purpose.
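
As a quick illustration of the bit mapping, the following sketch (not taken from real data; the values, bit names, and epoch are purely illustrative) builds a small StateVector by hand using the bits keyword documented in the class reference below, and inspects its bits. Indexing the result of get_bit_series() by bit name is an assumption here:

>>> from gwpy.timeseries import StateVector
>>> sv = StateVector([0b00011, 0b00111, 0b11111],
...                  bits=['science', 'conlog', 'up',
...                        'no injection', 'no excitation'],
...                  sample_rate=1, epoch=968631675)
>>> flags = sv.boolean              # 2-D boolean array, one column per bit
>>> up = sv.get_bit_series()['up']  # StateTimeSeries for the 'up' bit alone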

The StateVector fetched in the above example can then be parsed into a series of DataQualityFlag objects, recording the active segments for that bit in the vector:

>>> flags = state.to_dqflags(round=True)
>>> print(flags[0])
<DataQualityFlag(valid=[[968631675 ... 968632275)],
                 active=[[968632248 ... 968632275)],
                 ifo=None,
                 name='science',
                 version=None,
                 comment='H1:IFO-SV_STATE_VECTOR bit 0')>

Class reference

This reference contains the following Class entries:

StateVector Binary array representing good/bad state determinations of some data.
StateTimeSeries Boolean array representing a good/bad state determination of some data.
StateVectorDict
class gwpy.timeseries.StateVector[source]

Bases: gwpy.timeseries.core.TimeSeriesBase

Binary array representing good/bad state determinations of some data.

Each binary bit represents a single boolean condition, with the definitions of all the bits stored in the StateVector.bits attribute.

Parameters:

value : array-like

input data array

unit : Unit, optional

physical unit of these data

epoch : LIGOTimeGPS, float, str

GPS epoch associated with these data, any input parsable by to_gps is fine

bits : Bits, list, optional

list of bits defining this StateVector

sample_rate : float, Quantity, optional, default: 1

the rate of samples per second (Hertz)

times : array-like

the complete array of GPS times accompanying the data for this series. This argument takes precedence over epoch and sample_rate so should be given in place of these if relevant, not alongside

name : str, optional, default: None

descriptive title for this array

channel : Channel, str

source data stream for these data

dtype : dtype, optional, default: None

input data type

copy : bool, optional, default: False

choose to copy the input data to new memory

subok : bool, optional, default: True

allow passing of sub-classes by the array generator
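
For reference, a minimal direct construction might look like the following sketch (all values, bit names, the epoch, and the name are illustrative only):

>>> from gwpy.timeseries import StateVector
>>> sv = StateVector([3, 7, 7, 1],
...                  bits=['locked', 'science', 'no injection'],
...                  sample_rate=16, epoch=1126259446, name='example')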

Notes

Key methods:

fetch(channel, start, end[, bits, host, ...]) Fetch data from NDS into a StateVector.
read(*args, **kwargs) Read data into a StateVector
write(data, *args, **kwargs) Write this StateVector to a file
to_dqflags([bits, minlen, dtype, round]) Convert this StateVector into a DataQualityDict
plot([format, bits]) Plot the data for this StateVector

Attributes Summary

T Same as self.transpose(), except that self is returned if self.ndim < 2.
base Base object if memory is from some other object.
bits The list of bit names for this StateVector.
boolean A mapping of this StateVector to a 2-D array containing all binary bits as booleans, for each time point.
cgs Returns a copy of the current Quantity instance with CGS units.
channel Instrumental channel associated with these data
ctypes An object to simplify the interaction of the array with the ctypes module.
data
dt X-axis sample separation
dtype Data-type of the array’s elements.
duration Duration of this series in seconds
dx X-axis sample separation
epoch GPS epoch for these data.
equivalencies A list of equivalencies that will be applied by default during unit conversions.
flags Information about the memory layout of the array.
flat A 1-D iterator over the Quantity array.
imag The imaginary part of the array.
info Container for meta information like name, description, format.
isscalar True if the value of this quantity is a scalar, or False if it is an array-like object.
itemsize Length of one array element in bytes.
name Name for this data set
nbytes Total bytes consumed by the elements of the array.
ndim Number of array dimensions.
real The real part of the array.
sample_rate Data rate for this TimeSeries in samples per second (Hertz).
shape Tuple of array dimensions.
si Returns a copy of the current Quantity instance with SI units.
size Number of elements in the array.
span X-axis [low, high) segment encompassed by these data
strides Tuple of bytes to step in each dimension when traversing an array.
times Series of GPS times for each sample
unit The physical unit of these data
value The numerical value of this quantity.
x0 X-axis value of the first data point
xindex Positions of the data on the x-axis
xspan X-axis [low, high) segment encompassed by these data
xunit Unit of x-axis index

Methods Summary

all([axis, out, keepdims]) Returns True if all elements evaluate to True.
any([axis, out, keepdims]) Returns True if any of the elements of a evaluate to True.
append(other[, gap, inplace, pad, resize]) Connect another series onto the end of the current one.
argmax([axis, out]) Return indices of the maximum values along the given axis.
argmin([axis, out]) Return indices of the minimum values along the given axis of a.
argpartition(kth[, axis, kind, order]) Returns the indices that would partition this array.
argsort([axis, kind, order]) Returns the indices that would sort this array.
astype(dtype[, order, casting, subok, copy]) Copy of the array, cast to a specified type.
byteswap(inplace) Swap the bytes of the array elements
choose(choices[, out, mode]) Use an index array to construct a new array from a set of choices.
clip([min, max, out]) Return an array whose values are limited to [min, max].
compress(condition[, axis, out]) Return selected slices of this array along given axis.
conj() Complex-conjugate all elements.
conjugate() Return the complex conjugate, element-wise.
copy([order]) Return a copy of the array.
copy_metadata() Return a deepcopy of the metadata for this array
crop([start, end, copy]) Crop this series to the given x-axis extent.
cumprod([axis, dtype, out]) Return the cumulative product of the elements along the given axis.
cumsum([axis, dtype, out]) Return the cumulative sum of the elements along the given axis.
decompose([bases]) Generates a new Quantity with the units decomposed.
diagonal([offset, axis1, axis2]) Return specified diagonals.
diff([n, axis]) Calculate the n-th discrete difference along given axis.
dot(b[, out]) Dot product of two arrays.
dump(file) Dump a pickle of the array to the specified file.
dumps() Returns the pickle of the array as a string.
ediff1d([to_end, to_begin])
fetch(channel, start, end[, bits, host, ...]) Fetch data from NDS into a StateVector.
fetch_open_data(ifo, start, end[, name, host]) Fetch open-access data from the LIGO Open Science Center
fill(value) Fill the array with a scalar value.
find(channel, start, end[, frametype, pad, ...]) Find and read data from frames for a channel
flatten([order]) Return a copy of the array collapsed into one dimension.
from_hdf5(*args, **kwargs) Read an array from the given HDF file.
from_lal(*args, **kwargs) Generate a new TimeSeries from a LAL TimeSeries of any type.
from_nds2_buffer(*args, **kwargs) Construct a new TimeSeries from an nds2.buffer object
from_pycbc(ts) Convert a pycbc.types.timeseries.TimeSeries into a TimeSeries
get(channel, start, end[, bits]) Get data for this channel from frames or NDS
get_bit_series([bits]) Get the StateTimeSeries for each bit of this StateVector.
getfield(dtype[, offset]) Returns a field of the given array as a certain type.
insert(obj, values[, axis]) Insert values along the given axis before the given indices and return a new Quantity object.
is_compatible(other) Check whether this series and other have compatible metadata
is_contiguous(other[, tol]) Check whether other is contiguous with self.
item(*args) Copy an element of an array to a standard Python scalar and return it.
itemset(*args) Insert scalar into an array (scalar is cast to array’s dtype, if possible)
max([axis, out]) Return the maximum along a given axis.
mean([axis, dtype, out, keepdims]) Returns the average of the array elements along given axis.
median([axis]) Compute the median along the specified axis.
min([axis, out, keepdims]) Return the minimum along a given axis.
nansum([axis, out, keepdims])
newbyteorder([new_order]) Return the array with the same data viewed with a different byte order.
nonzero() Return the indices of the elements that are non-zero.
override_unit(unit[, parse_strict]) Forcefully reset the unit of these data
pad(pad_width, **kwargs) Pad this series to a new size
partition(kth[, axis, kind, order]) Rearranges the elements in the array in such a way that value of the element in kth position is in the position it would be in a sorted array.
plot([format, bits]) Plot the data for this StateVector
prepend(other[, gap, inplace, pad, resize]) Connect another series onto the start of the current one.
prod([axis, dtype, out, keepdims]) Return the product of the array elements over the given axis
ptp([axis, out]) Peak to peak (maximum - minimum) value along a given axis.
put(indices, values[, mode]) Set a.flat[n] = values[n] for all n in indices.
ravel([order]) Return a flattened array.
read(*args, **kwargs) Read data into a StateVector
repeat(repeats[, axis]) Repeat elements of an array.
resample(rate) Resample this StateVector to a new rate
reshape(shape[, order]) Returns an array containing the same data with a new shape.
resize(new_shape[, refcheck]) Change shape and size of array in-place.
round([decimals, out]) Return a with each element rounded to the given number of decimals.
searchsorted(v[, side, sorter]) Find indices where elements of v should be inserted in a to maintain order.
setfield(val, dtype[, offset]) Put a value into a specified place in a field defined by a data-type.
setflags([write, align, uic]) Set array flags WRITEABLE, ALIGNED, and UPDATEIFCOPY, respectively.
sort([axis, kind, order]) Sort an array, in-place.
squeeze([axis]) Remove single-dimensional entries from the shape of a.
std([axis, dtype, out, ddof, keepdims]) Returns the standard deviation of the array elements along given axis.
sum([axis, dtype, out, keepdims]) Return the sum of the array elements over the given axis.
swapaxes(axis1, axis2) Return a view of the array with axis1 and axis2 interchanged.
take(indices[, axis, out, mode]) Return an array formed from the elements of a at the given indices.
to(unit[, equivalencies]) Returns a new Quantity object with the specified units.
to_dqflags([bits, minlen, dtype, round]) Convert this StateVector into a DataQualityDict
to_hdf5(*args, **kwargs) Convert this array to a h5py.Dataset.
to_lal(*args, **kwargs) Bogus function inherited from superclass, do not use.
to_pycbc(*args, **kwargs) Convert this TimeSeries into a PyCBC
tobytes([order]) Construct Python bytes containing the raw data bytes in the array.
tofile(fid[, sep, format]) Write array to a file as text or binary (default).
tolist() Return the array as a (possibly nested) list.
tostring([order]) Construct Python bytes containing the raw data bytes in the array.
trace([offset, axis1, axis2, dtype, out]) Return the sum along diagonals of the array.
transpose(*axes) Returns a view of the array with axes transposed.
update(other[, inplace]) Update this series by appending new data from an other and dropping the same amount of data off the start.
value_at(x) Return the value of this Series at the given xindex value
var([axis, dtype, out, ddof, keepdims]) Returns the variance of the array elements, along given axis.
view([dtype, type]) New view of array with the same data.
write(data, *args, **kwargs) Write this StateVector to a file
zip() Zip the xindex and value arrays of this Series

Attributes Documentation

T

Same as self.transpose(), except that self is returned if self.ndim < 2.

Examples

>>> x = np.array([[1.,2.],[3.,4.]])
>>> x
array([[ 1.,  2.],
       [ 3.,  4.]])
>>> x.T
array([[ 1.,  3.],
       [ 2.,  4.]])
>>> x = np.array([1.,2.,3.,4.])
>>> x
array([ 1.,  2.,  3.,  4.])
>>> x.T
array([ 1.,  2.,  3.,  4.])
base

Base object if memory is from some other object.

Examples

The base of an array that owns its memory is None:

>>> x = np.array([1,2,3,4])
>>> x.base is None
True

Slicing creates a view, whose memory is shared with x:

>>> y = x[2:]
>>> y.base is x
True
bits

The list of bit names for this StateVector.

Type:Bits
boolean

A mapping of this StateVector to a 2-D array containing all binary bits as booleans, for each time point.

cgs

Returns a copy of the current Quantity instance with CGS units. The value of the resulting object will be scaled.

channel

Instrumental channel associated with these data

Type:Channel
ctypes

An object to simplify the interaction of the array with the ctypes module.

This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library.

Parameters:

None :

Returns:

c : Python object

Possessing attributes data, shape, strides, etc.

See also

numpy.ctypeslib

Notes

Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes):

  • data: A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as self.__array_interface__['data'][0].
  • shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to dtype(‘p’) on this platform. This base-type could be c_int, c_long, or c_longlong depending on the platform. The c_intp type is defined accordingly in numpy.ctypeslib. The ctypes array contains the shape of the underlying array.
  • strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array.
  • data_as(obj): Return the data pointer cast to a particular c-types object. For example, calling self._as_parameter_ is equivalent to self.data_as(ctypes.c_void_p). Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: self.data_as(ctypes.POINTER(ctypes.c_double)).
  • shape_as(obj): Return the shape tuple as an array of some other c-types type. For example: self.shape_as(ctypes.c_short).
  • strides_as(obj): Return the strides tuple as an array of some other c-types type. For example: self.strides_as(ctypes.c_longlong).

Be careful using the ctypes attribute - especially on temporary arrays or arrays constructed on the fly. For example, calling (a+b).ctypes.data_as(ctypes.c_void_p) returns a pointer to memory that is invalid because the array created as (a+b) is deallocated before the next Python statement. You can avoid this problem using either c=a+b or ct=(a+b).ctypes. In the latter case, ct will hold a reference to the array until ct is deleted or re-assigned.

If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the _as_parameter_ attribute which will return an integer equal to the data attribute.

Examples

>>> import ctypes
>>> x
array([[0, 1],
       [2, 3]])
>>> x.ctypes.data
30439712
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long))
<ctypes.LP_c_long object at 0x01F01300>
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long)).contents
c_long(0)
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_longlong)).contents
c_longlong(4294967296L)
>>> x.ctypes.shape
<numpy.core._internal.c_long_Array_2 object at 0x01FFD580>
>>> x.ctypes.shape_as(ctypes.c_long)
<numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
>>> x.ctypes.strides
<numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
>>> x.ctypes.strides_as(ctypes.c_longlong)
<numpy.core._internal.c_longlong_Array_2 object at 0x01F01300>
data
dt

X-axis sample separation

Type:Quantity scalar
dtype

Data-type of the array’s elements.

Parameters:None :
Returns:d : numpy dtype object

See also

numpy.dtype

Examples

>>> x
array([[0, 1],
       [2, 3]])
>>> x.dtype
dtype('int32')
>>> type(x.dtype)
<type 'numpy.dtype'>
duration

Duration of this series in seconds

dx

X-axis sample separation

Type:Quantity scalar
epoch

GPS epoch for these data.

This attribute is stored internally by the x0 attribute

Type:Time
equivalencies

A list of equivalencies that will be applied by default during unit conversions.

flags

Information about the memory layout of the array.

Notes

The flags object can be accessed dictionary-like (as in a.flags['WRITEABLE']), or by using lowercased attribute names (as in a.flags.writeable). Short flag names are only supported in dictionary access.

Only the UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling ndarray.setflags.

The array flags cannot be set arbitrarily:

  • UPDATEIFCOPY can only be set False.
  • ALIGNED can only be set True if the data is truly aligned.
  • WRITEABLE can only be set True if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string.

Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays.

Even for contiguous arrays a stride for a given dimension arr.strides[dim] may be arbitrary if arr.shape[dim] == 1 or the array has no elements. It does not generally hold that self.strides[-1] == self.itemsize for C-style contiguous arrays or self.strides[0] == self.itemsize for Fortran-style contiguous arrays is true.

flat

A 1-D iterator over the Quantity array.

This returns a QuantityIterator instance, which behaves the same as the flatiter instance returned by flat, and is similar to, but not a subclass of, Python’s built-in iterator object.

imag

The imaginary part of the array.

Examples

>>> x = np.sqrt([1+0j, 0+1j])
>>> x.imag
array([ 0.        ,  0.70710678])
>>> x.imag.dtype
dtype('float64')
info

Container for meta information like name, description, format. This is required when the object is used as a mixin column within a table, but can be used as a general way to store meta information.

isscalar

True if the value of this quantity is a scalar, or False if it is an array-like object.

Note

This is subtly different from numpy.isscalar in that numpy.isscalar returns False for a zero-dimensional array (e.g. np.array(1)), while this is True for quantities, since quantities cannot represent true numpy scalars.

itemsize

Length of one array element in bytes.

Examples

>>> x = np.array([1,2,3], dtype=np.float64)
>>> x.itemsize
8
>>> x = np.array([1,2,3], dtype=np.complex128)
>>> x.itemsize
16
name

Name for this data set

Type:str
nbytes

Total bytes consumed by the elements of the array.

Notes

Does not include memory consumed by non-element attributes of the array object.

Examples

>>> x = np.zeros((3,5,2), dtype=np.complex128)
>>> x.nbytes
480
>>> np.prod(x.shape) * x.itemsize
480
ndim

Number of array dimensions.

Examples

>>> x = np.array([1, 2, 3])
>>> x.ndim
1
>>> y = np.zeros((2, 3, 4))
>>> y.ndim
3
real

The real part of the array.

See also

numpy.real
equivalent function

Examples

>>> x = np.sqrt([1+0j, 0+1j])
>>> x.real
array([ 1.        ,  0.70710678])
>>> x.real.dtype
dtype('float64')
sample_rate

Data rate for this TimeSeries in samples per second (Hertz).

This attribute is stored internally by the dx attribute

Type:Quantity scalar
shape

Tuple of array dimensions.

Notes

May be used to “reshape” the array, as long as this would not require a change in the total number of elements

Examples

>>> x = np.array([1, 2, 3, 4])
>>> x.shape
(4,)
>>> y = np.zeros((2, 3, 4))
>>> y.shape
(2, 3, 4)
>>> y.shape = (3, 8)
>>> y
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
>>> y.shape = (3, 6)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: total size of new array must be unchanged
si

Returns a copy of the current Quantity instance with SI units. The value of the resulting object will be scaled.

size

Number of elements in the array.

Equivalent to np.prod(a.shape), i.e., the product of the array’s dimensions.

Examples

>>> x = np.zeros((3, 5, 2), dtype=np.complex128)
>>> x.size
30
>>> np.prod(x.shape)
30
span

X-axis [low, high) segment encompassed by these data

Type:Segment
strides

Tuple of bytes to step in each dimension when traversing an array.

The byte offset of element (i[0], i[1], ..., i[n]) in an array a is:

offset = sum(np.array(i) * a.strides)

A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide.

See also

numpy.lib.stride_tricks.as_strided

Notes

Imagine an array of 32-bit integers (each 4 bytes):

x = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]], dtype=np.int32)

This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array x will be (20, 4).

Examples

>>> y = np.reshape(np.arange(2*3*4), (2,3,4))
>>> y
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],
       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
>>> y.strides
(48, 16, 4)
>>> y[1,1,1]
17
>>> offset=sum(y.strides * np.array((1,1,1)))
>>> offset/y.itemsize
17
>>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
>>> x.strides
(32, 4, 224, 1344)
>>> i = np.array([3,5,2,2])
>>> offset = sum(i * x.strides)
>>> x[3,5,2,2]
813
>>> offset / x.itemsize
813
times

Series of GPS times for each sample

unit

The physical unit of these data

Type:UnitBase
value

The numerical value of this quantity.

x0

X-axis value of the first data point

Type:Quantity scalar
xindex

Positions of the data on the x-axis

Type:Quantity array
xspan

X-axis [low, high) segment encompassed by these data

Type:Segment
xunit

Unit of x-axis index

Type:Unit

Methods Documentation

all(axis=None, out=None, keepdims=False)

Returns True if all elements evaluate to True.

Refer to numpy.all for full documentation.

See also

numpy.all
equivalent function
any(axis=None, out=None, keepdims=False)

Returns True if any of the elements of a evaluate to True.

Refer to numpy.any for full documentation.

See also

numpy.any
equivalent function
append(other, gap='raise', inplace=True, pad=0.0, resize=True)[source]

Connect another series onto the end of the current one.

Parameters:

other : Series

another series of the same type to connect to this one

gap : str, optional, default: 'raise'

action to perform if there’s a gap between the other series and this one. One of

  • 'raise' - raise an Exception
  • 'ignore' - remove gap and join data
  • 'pad' - pad gap with zeros

inplace : bool, optional, default: True

perform operation in-place, modifying current Series, otherwise copy data and return new Series

Warning

inplace append bypasses the reference check in numpy.ndarray.resize, so be careful to only use this for arrays that aren't sharing their memory!

pad : float, optional, default: 0.0

value with which to pad discontiguous series

resize : bool, optional, default: True

resize this array to accommodate new data, otherwise shift the old data to the left (potentially falling off the start) and put the new data in at the end

Returns:

series : Series

a new series containing joined data sets
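
For example, the following sketch (illustrative values only) joins two contiguous series without modifying either input:

>>> from gwpy.timeseries import StateVector
>>> a = StateVector([1, 3, 7], sample_rate=1, epoch=0)
>>> b = StateVector([7, 3, 1], sample_rate=1, epoch=3)
>>> joined = a.append(b, inplace=False)
>>> joined.span    # covers the union of the two inputs, [0 ... 6)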

argmax(axis=None, out=None)

Return indices of the maximum values along the given axis.

Refer to numpy.argmax for full documentation.

See also

numpy.argmax
equivalent function
argmin(axis=None, out=None)

Return indices of the minimum values along the given axis of a.

Refer to numpy.argmin for detailed documentation.

See also

numpy.argmin
equivalent function
argpartition(kth, axis=-1, kind='introselect', order=None)

Returns the indices that would partition this array.

Refer to numpy.argpartition for full documentation.

New in version 1.8.0.

See also

numpy.argpartition
equivalent function
argsort(axis=-1, kind='quicksort', order=None)

Returns the indices that would sort this array.

Refer to numpy.argsort for full documentation.

See also

numpy.argsort
equivalent function
astype(dtype, order='K', casting='unsafe', subok=True, copy=True)

Copy of the array, cast to a specified type.

Parameters:

dtype : str or dtype

Typecode or data-type to which the array is cast.

order : {‘C’, ‘F’, ‘A’, ‘K’}, optional

Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’.

casting : {‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional

Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility.

  • ‘no’ means the data types should not be cast at all.
  • ‘equiv’ means only byte-order changes are allowed.
  • ‘safe’ means only casts which can preserve values are allowed.
  • ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.
  • ‘unsafe’ means any data conversions may be done.

subok : bool, optional

If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array.

copy : bool, optional

By default, astype always returns a newly allocated array. If this is set to false, and the dtype, order, and subok requirements are satisfied, the input array is returned instead of a copy.

Returns:

arr_t : ndarray

Unless copy is False and the other conditions for returning the input array are satisfied (see description for copy input parameter), arr_t is a new array of the same shape as the input array, with dtype, order given by dtype, order.

Raises:

ComplexWarning :

When casting from complex to float or int. To avoid this, one should use a.real.astype(t).

Notes

Starting in NumPy 1.9, astype method now returns an error if the string dtype to cast to is not long enough in ‘safe’ casting mode to hold the max value of integer/float array that is being casted. Previously the casting was allowed even if the result was truncated.

Examples

>>> x = np.array([1, 2, 2.5])
>>> x
array([ 1. ,  2. ,  2.5])
>>> x.astype(int)
array([1, 2, 2])
byteswap(inplace)

Swap the bytes of the array elements

Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place.

Parameters:

inplace : bool, optional

If True, swap bytes in-place, default is False.

Returns:

out : ndarray

The byteswapped array. If inplace is True, this is a view to self.

Examples

>>> A = np.array([1, 256, 8755], dtype=np.int16)
>>> map(hex, A)
['0x1', '0x100', '0x2233']
>>> A.byteswap(True)
array([  256,     1, 13090], dtype=int16)
>>> map(hex, A)
['0x100', '0x1', '0x3322']

Arrays of strings are not swapped

>>> A = np.array(['ceg', 'fac'])
>>> A.byteswap()
array(['ceg', 'fac'],
      dtype='|S3')
choose(choices, out=None, mode='raise')

Use an index array to construct a new array from a set of choices.

Refer to numpy.choose for full documentation.

See also

numpy.choose
equivalent function
clip(min=None, max=None, out=None)

Return an array whose values are limited to [min, max]. One of max or min must be given.

Refer to numpy.clip for full documentation.

See also

numpy.clip
equivalent function
compress(condition, axis=None, out=None)

Return selected slices of this array along given axis.

Refer to numpy.compress for full documentation.

See also

numpy.compress
equivalent function
conj()

Complex-conjugate all elements.

Refer to numpy.conjugate for full documentation.

See also

numpy.conjugate
equivalent function
conjugate()

Return the complex conjugate, element-wise.

Refer to numpy.conjugate for full documentation.

See also

numpy.conjugate
equivalent function
copy(order='C')[source]

Return a copy of the array.

Parameters:

order : {‘C’, ‘F’, ‘A’, ‘K’}, optional

Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible. (Note that this function and numpy.copy are very similar, but have different default values for their order= arguments.)

Examples

>>> x = np.array([[1,2,3],[4,5,6]], order='F')
>>> y = x.copy()
>>> x.fill(0)
>>> x
array([[0, 0, 0],
       [0, 0, 0]])
>>> y
array([[1, 2, 3],
       [4, 5, 6]])
>>> y.flags['C_CONTIGUOUS']
True
copy_metadata()[source]

Return a deepcopy of the metadata for this array

crop(start=None, end=None, copy=False)[source]

Crop this series to the given x-axis extent.

Parameters:

start : float, optional

lower limit of x-axis to crop to, defaults to current x0

end : float, optional

upper limit of x-axis to crop to, defaults to current series end

copy : bool, optional, default: False

copy the input data to fresh memory, otherwise return a view

Returns:

series : Series

A new series with a sub-set of the input data

Notes

If either start or end are outside of the original Series span, warnings will be printed and the limits will be restricted to the xspan
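
A small sketch (illustrative values only):

>>> from gwpy.timeseries import StateVector
>>> sv = StateVector([0, 1, 3, 7, 15, 31], sample_rate=1, epoch=1000000000)
>>> sub = sv.crop(1000000001, 1000000004)   # keeps samples in [...01 ... ...04)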

cumprod(axis=None, dtype=None, out=None)

Return the cumulative product of the elements along the given axis.

Refer to numpy.cumprod for full documentation.

See also

numpy.cumprod
equivalent function
cumsum(axis=None, dtype=None, out=None)

Return the cumulative sum of the elements along the given axis.

Refer to numpy.cumsum for full documentation.

See also

numpy.cumsum
equivalent function
decompose(bases=[])

Generates a new Quantity with the units decomposed. Decomposed units have only irreducible units in them (see astropy.units.UnitBase.decompose).

Parameters:

bases : sequence of UnitBase, optional

The bases to decompose into. When not provided, decomposes down to any irreducible units. When provided, the decomposed result will only contain the given units. This will raise a UnitsError if it’s not possible to do so.

Returns:

newq : Quantity

A new object equal to this quantity with units decomposed.

diagonal(offset=0, axis1=0, axis2=1)

Return specified diagonals. In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed.

Refer to numpy.diagonal() for full documentation.

See also

numpy.diagonal
equivalent function
diff(n=1, axis=-1)[source]

Calculate the n-th discrete difference along given axis.

The first difference is given by out[n] = a[n+1] - a[n] along the given axis, higher differences are calculated by using diff recursively.

Parameters:

a : array_like

Input array

n : int, optional

The number of times values are differenced.

axis : int, optional

The axis along which the difference is taken, default is the last axis.

Returns:

diff : ndarray

The n-th differences. The shape of the output is the same as a except along axis where the dimension is smaller by n.

Examples

>>> x = np.array([1, 2, 4, 7, 0])
>>> np.diff(x)
array([ 1,  2,  3, -7])
>>> np.diff(x, n=2)
array([  1,   1, -10])
>>> x = np.array([[1, 3, 6, 10], [0, 5, 6, 8]])
>>> np.diff(x)
array([[2, 3, 4],
       [5, 1, 2]])
>>> np.diff(x, axis=0)
array([[-1,  2,  0, -2]])
dot(b, out=None)

Dot product of two arrays.

Refer to numpy.dot for full documentation.

See also

numpy.dot
equivalent function

Examples

>>> a = np.eye(2)
>>> b = np.ones((2, 2)) * 2
>>> a.dot(b)
array([[ 2.,  2.],
       [ 2.,  2.]])

This array method can be conveniently chained:

>>> a.dot(b).dot(b)
array([[ 8.,  8.],
       [ 8.,  8.]])
dump(file)

Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load.

Parameters:

file : str

A string naming the dump file.

dumps()[source]

Returns the pickle of the array as a string. pickle.loads or numpy.loads will convert the string back to an array.

Parameters:None :
ediff1d(to_end=None, to_begin=None)
classmethod fetch(channel, start, end, bits=None, host=None, port=None, verbose=False, connection=None, type=102)[source]

Fetch data from NDS into a StateVector.

Parameters:

channel : str, Channel

the name of the channel to read, or a Channel object.

start : LIGOTimeGPS, float, str

GPS start time of required data, any input parseable by to_gps is fine

end : LIGOTimeGPS, float, str

GPS end time of required data, any input parseable by to_gps is fine

bits : Bits, list, optional

definition of bits for this StateVector

host : str, optional

URL of NDS server to use, defaults to observatory site host

port : int, optional

port number for NDS server query, must be given with host

verify : bool, optional, default: True

check channels exist in database before asking for data

connection : NDS2Connection

open NDS connection to use

verbose : bool, optional

print verbose output about NDS progress

type : int, optional

NDS2 channel type integer

dtype : type, numpy.dtype, str, optional

identifier for desired output data type
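
For example, mirroring the narrative example earlier on this page (this requires network access to an NDS server and the nds2-client package):

>>> from gwpy.timeseries import StateVector
>>> state = StateVector.fetch('H1:IFO-SV_STATE_VECTOR', 968631675, 968632275,
...                           bits=['science', 'conlog', 'up',
...                                 'no injection', 'no excitation'])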

classmethod fetch_open_data(ifo, start, end, name='quality/simple', host='https://losc.ligo.org')[source]

Fetch open-access data from the LIGO Open Science Center

Parameters:

ifo : str

the two-character prefix of the IFO in which you are interested, e.g. 'L1'

start : LIGOTimeGPS, float, str, optional

GPS start time of required data, defaults to start of data found; any input parseable by to_gps is fine

end : LIGOTimeGPS, float, str, optional

GPS end time of required data, defaults to end of data found; any input parseable by to_gps is fine

name : str, optional

the full name of HDF5 dataset that represents the data you want, e.g. 'strain/Strain' for _h(t)_ data, or 'quality/simple' for basic data-quality information

host : str, optional

HTTP host name of LOSC server to access
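
A sketch of typical usage (requires network access to the LOSC server; the GPS times below, around GW150914, are illustrative, and the default 'quality/simple' dataset is assumed):

>>> from gwpy.timeseries import StateVector
>>> state = StateVector.fetch_open_data('L1', 1126259446, 1126259478)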

fill(value)

Fill the array with a scalar value.

Parameters:

value : scalar

All elements of a will be assigned this value.

Examples

>>> a = np.array([1, 2])
>>> a.fill(0)
>>> a
array([0, 0])
>>> a = np.empty(2)
>>> a.fill(1)
>>> a
array([ 1.,  1.])
find(channel, start, end, frametype=None, pad=None, dtype=None, nproc=1, verbose=False, **readargs)[source]

Find and read data from frames for a channel

Parameters:

channel : str, Channel

the name of the channel to read, or a Channel object.

start : LIGOTimeGPS, float, str

GPS start time of required data, any input parseable by to_gps is fine

end : LIGOTimeGPS, float, str

GPS end time of required data, any input parseable by to_gps is fine

frametype : str, optional

name of frametype in which this channel is stored, will search for containing frame types if necessary

pad : float, optional

value with which to fill gaps in the source data, only used if gap is not given, or gap='pad' is given

nproc : int, optional, default: 1

number of parallel processes to use, serial process by default.

dtype : numpy.dtype, str, type, or dict

numeric data type for returned data, e.g. numpy.float, or dict of (channel, dtype) pairs

verbose : bool, optional

print verbose output about NDS progress.

**readargs :

any other keyword arguments to be passed to read()
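
A sketch of typical usage (this only works on a machine with direct access to frame files; the channel name and frametype below are assumptions for illustration):

>>> from gwpy.timeseries import StateVector
>>> state = StateVector.find('L1:GDS-CALIB_STATE_VECTOR',
...                          1126259446, 1126259478,
...                          frametype='L1_HOFT_C00')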

flatten(order='C')

Return a copy of the array collapsed into one dimension.

Parameters:

order : {‘C’, ‘F’, ‘A’, ‘K’}, optional

‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. ‘A’ means to flatten in column-major order if a is Fortran contiguous in memory, row-major order otherwise. ‘K’ means to flatten a in the order the elements occur in memory. The default is ‘C’.

Returns:

y : ndarray

A copy of the input array, flattened to one dimension.

See also

ravel
Return a flattened array.
flat
A 1-D flat iterator over the array.

Examples

>>> a = np.array([[1,2], [3,4]])
>>> a.flatten()
array([1, 2, 3, 4])
>>> a.flatten('F')
array([1, 3, 2, 4])
from_hdf5(*args, **kwargs)[source]

Read an array from the given HDF file.

This method has been deprecated in favour of the unified I/O method:

Class.read(..., format=’hdf5’)

from_lal(*args, **kwargs)[source]

Generate a new TimeSeries from a LAL TimeSeries of any type.

from_nds2_buffer(*args, **kwargs)[source]

Construct a new TimeSeries from an nds2.buffer object

Parameters:

buffer_ : nds2.buffer

the input NDS2-client buffer to read

**metadata :

any other metadata keyword arguments to pass to the TimeSeries constructor

Returns:

timeseries : TimeSeries

a new TimeSeries containing the data from the nds2.buffer, and the appropriate metadata

Notes

This classmethod requires the nds2-client package

from_pycbc(ts)[source]

Convert a pycbc.types.timeseries.TimeSeries into a TimeSeries

Parameters:

ts : pycbc.types.timeseries.TimeSeries

the input PyCBC TimeSeries array

Returns:

timeseries : TimeSeries

a GWpy version of the input timeseries

classmethod get(channel, start, end, bits=None, **kwargs)[source]

Get data for this channel from frames or NDS

Parameters:

channel : str, Channel

the name of the channel to read, or a Channel object.

start : LIGOTimeGPS, float, str

GPS start time of required data, any input parseable by to_gps is fine

end : LIGOTimeGPS, float, str

GPS end time of required data, any input parseable by to_gps is fine

bits : Bits, list, optional

definition of bits for this StateVector

pad : float, optional

value with which to fill gaps in the source data, only used if gap is not given, or gap='pad' is given

dtype : numpy.dtype, str, type, or dict

numeric data type for returned data, e.g. numpy.float, or dict of (channel, dtype) pairs

nproc : int, optional, default: 1

number of parallel processes to use, serial process by default.

verbose : bool, optional

print verbose output about NDS progress.

**kwargs :

other keyword arguments to pass to either find() (for direct GWF file access) or fetch() (for remote NDS2 access)

See also

StateVector.fetch
for grabbing data from a remote NDS2 server
StateVector.find
for discovering and reading data from local GWF files
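
A sketch of typical usage (the channel name and times are illustrative; data access requires local frames or an NDS connection):

>>> from gwpy.timeseries import StateVector
>>> state = StateVector.get('H1:IFO-SV_STATE_VECTOR', 968631675, 968632275)
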
get_bit_series(bits=None)[source]

Get the StateTimeSeries for each bit of this StateVector.

Parameters:

bits : list, optional

a list of bit indices or bit names, defaults to bits

Returns:

bitseries : TimeSeriesDict

a TimeSeriesDict of StateTimeSeries, one for each given bit
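
A small sketch (illustrative values and bit names; indexing the returned dict by bit name is an assumption here):

>>> from gwpy.timeseries import StateVector
>>> sv = StateVector([1, 3, 3, 7], bits=['science', 'conlog', 'up'],
...                  sample_rate=1, epoch=0)
>>> bitseries = sv.get_bit_series(['science', 'up'])
>>> science = bitseries['science']   # StateTimeSeries for the 'science' bit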

getfield(dtype, offset=0)

Returns a field of the given array as a certain type.

A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes.

Parameters:

dtype : str or dtype

The data type of the view. The dtype size of the view can not be larger than that of the array itself.

offset : int

Number of bytes to skip before beginning the element view.

Examples

>>> x = np.diag([1.+1.j]*2)
>>> x[1, 1] = 2 + 4.j
>>> x
array([[ 1.+1.j,  0.+0.j],
       [ 0.+0.j,  2.+4.j]])
>>> x.getfield(np.float64)
array([[ 1.,  0.],
       [ 0.,  2.]])

By choosing an offset of 8 bytes we can select the complex part of the array for our view:

>>> x.getfield(np.float64, offset=8)
array([[ 1.,  0.],
       [ 0.,  4.]])
insert(obj, values, axis=None)

Insert values along the given axis before the given indices and return a new Quantity object.

This is a thin wrapper around the numpy.insert function.

Parameters:

obj : int, slice or sequence of ints

Object that defines the index or indices before which values is inserted.

values : array-like

Values to insert. If the type of values is different from that of quantity, values is converted to the matching type. values should be shaped so that it can be broadcast appropriately The unit of values must be consistent with this quantity.

axis : int, optional

Axis along which to insert values. If axis is None then the quantity array is flattened before insertion.

Returns:

out : Quantity

A copy of quantity with values inserted. Note that the insertion does not occur in-place: a new quantity array is returned.

Examples

>>> import astropy.units as u
>>> q = [1, 2] * u.m
>>> q.insert(0, 50 * u.cm)
<Quantity [ 0.5,  1.,  2.] m>
>>> q = [[1, 2], [3, 4]] * u.m
>>> q.insert(1, [10, 20] * u.m, axis=0)
<Quantity [[  1.,  2.],
           [ 10., 20.],
           [  3.,  4.]] m>
>>> q.insert(1, 10 * u.m, axis=1)
<Quantity [[  1., 10.,  2.],
           [  3., 10.,  4.]] m>
is_compatible(other)[source]

Check whether this series and other have compatible metadata

This method tests that the sample size and the unit match.

is_contiguous(other, tol=3.814697265625e-06)[source]

Check whether other is contiguous with self.

Parameters:

other : Series, numpy.ndarray

another series of the same type to test for contiguity

tol : float, optional

the numerical tolerance of the test

Returns:

1 :

if other is contiguous with this series, i.e. would attach seamlessly onto the end

-1 :

if other is anti-contiguous with this series, i.e. would attach seamlessly onto the start

0 :

if other is completely dis-contiguous with this series

Notes

if a raw numpy.ndarray is passed as other, with no metadata, then the contiguity check will always pass
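
A small sketch (illustrative values only):

>>> from gwpy.timeseries import StateVector
>>> a = StateVector([1, 2, 3], sample_rate=1, epoch=0)
>>> b = StateVector([4, 5, 6], sample_rate=1, epoch=3)
>>> a.is_contiguous(b)    # b starts exactly where a ends
1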

item(*args)

Copy an element of an array to a standard Python scalar and return it.

Parameters:

*args : Arguments (variable number and type)

  • none: in this case, the method only works for arrays with one element (a.size == 1), which element is copied into a standard Python scalar object and returned.
  • int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return.
  • tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array.
Returns:

z : Standard Python scalar object

A copy of the specified element of the array as a suitable Python scalar

Notes

When the data type of a is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.

item is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math.

Examples

>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[3, 1, 7],
       [2, 8, 3],
       [8, 5, 3]])
>>> x.item(3)
2
>>> x.item(7)
5
>>> x.item((0, 1))
1
>>> x.item((2, 2))
3
itemset(*args)

Insert scalar into an array (scalar is cast to array’s dtype, if possible)

There must be at least 1 argument, and define the last argument as item. Then, a.itemset(*args) is equivalent to but faster than a[args] = item. The item should be a scalar value and args must select a single item in the array a.

Parameters:

*args : Arguments

If one argument: a scalar, only used in case a is of size 1. If two arguments: the last argument is the value to be set and must be a scalar, the first argument specifies a single array element location. It is either an int or a tuple.

Notes

Compared to indexing syntax, itemset provides some speed increase for placing a scalar into a particular location in an ndarray, if you must do this. However, generally this is discouraged: among other problems, it complicates the appearance of the code. Also, when using itemset (and item) inside a loop, be sure to assign the methods to a local variable to avoid the attribute look-up at each loop iteration.

Examples

>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[3, 1, 7],
       [2, 8, 3],
       [8, 5, 3]])
>>> x.itemset(4, 0)
>>> x.itemset((2, 2), 9)
>>> x
array([[3, 1, 7],
       [2, 0, 3],
       [8, 5, 9]])
max(axis=None, out=None)

Return the maximum along a given axis.

Refer to numpy.amax for full documentation.

See also

numpy.amax
equivalent function
mean(axis=None, dtype=None, out=None, keepdims=False)

Returns the average of the array elements along given axis.

Refer to numpy.mean for full documentation.

See also

numpy.mean
equivalent function
median(axis=None, **kwargs)[source]

Compute the median along the specified axis.

Returns the median of the array elements.

Parameters:

a : array_like

Input array or object that can be converted to an array.

axis : {int, sequence of int, None}, optional

Axis or axes along which the medians are computed. The default is to compute the median along a flattened version of the array. A sequence of axes is supported since version 1.9.0.

out : ndarray, optional

Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary.

overwrite_input : bool, optional

If True, then allow use of memory of input array a for calculations. The input array will be modified by the call to median. This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. If overwrite_input is True and a is not already an ndarray, an error will be raised.

keepdims : bool, optional

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original arr.

New in version 1.9.0.

Returns:

median : ndarray

A new array holding the result. If the input contains integers or floats smaller than float64, then the output data-type is np.float64. Otherwise, the data-type of the output is the same as that of the input. If out is specified, that array is returned instead.

See also

mean, percentile

Notes

Given a vector V of length N, the median of V is the middle value of a sorted copy of V, V_sorted, i.e. V_sorted[(N-1)/2] when N is odd, and the average of the two middle values of V_sorted when N is even.

Examples

>>> a = np.array([[10, 7, 4], [3, 2, 1]])
>>> a
array([[10,  7,  4],
       [ 3,  2,  1]])
>>> np.median(a)
3.5
>>> np.median(a, axis=0)
array([ 6.5,  4.5,  2.5])
>>> np.median(a, axis=1)
array([ 7.,  2.])
>>> m = np.median(a, axis=0)
>>> out = np.zeros_like(m)
>>> np.median(a, axis=0, out=m)
array([ 6.5,  4.5,  2.5])
>>> m
array([ 6.5,  4.5,  2.5])
>>> b = a.copy()
>>> np.median(b, axis=1, overwrite_input=True)
array([ 7.,  2.])
>>> assert not np.all(a==b)
>>> b = a.copy()
>>> np.median(b, axis=None, overwrite_input=True)
3.5
>>> assert not np.all(a==b)
min(axis=None, out=None, keepdims=False)

Return the minimum along a given axis.

Refer to numpy.amin for full documentation.

See also

numpy.amin
equivalent function
nansum(axis=None, out=None, keepdims=False)
newbyteorder(new_order='S')

Return the array with the same data viewed with a different byte order.

Equivalent to:

arr.view(arr.dtype.newbyteorder(new_order))

Changes are also made in all fields and sub-arrays of the array data type.

Parameters:

new_order : string, optional

Byte order to force; a value from the byte order specifications below. new_order codes can be any of:

  • ‘S’ - swap dtype from current to opposite endian
  • {‘<’, ‘L’} - little endian
  • {‘>’, ‘B’} - big endian
  • {‘=’, ‘N’} - native order
  • {‘|’, ‘I’} - ignore (no change to byte order)

The default value (‘S’) results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of ‘B’ or ‘b’ or ‘biggish’ are valid to specify big-endian.

Returns:

new_arr : array

New array object with the dtype reflecting given change to the byte order.

nonzero()

Return the indices of the elements that are non-zero.

Refer to numpy.nonzero for full documentation.

See also

numpy.nonzero
equivalent function
override_unit(unit, parse_strict='raise')[source]

Forcefully reset the unit of these data

Use of this method is discouraged in favour of to(), which performs accurate conversions from one unit to another. The method should really only be used when the original unit of the array is plain wrong.

Parameters:

unit : Unit, str

the unit to force onto this array

Raises:

ValueError :

if a str cannot be parsed as a valid unit
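
A small sketch (illustrative values; only the unit label changes, the numerical values are untouched):

>>> from gwpy.timeseries import TimeSeries
>>> data = TimeSeries([1, 2, 3], unit='ct', sample_rate=1)
>>> data.override_unit('V')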

pad(pad_width, **kwargs)[source]

Pad this series to a new size

Parameters:

pad_width : int, pair of ints

number of samples by which to pad each end of the array. Single int to pad both ends by the same amount, or (before, after) tuple to give uneven padding

**kwargs :

see numpy.pad() for kwarg documentation

Returns:

series : Series

the padded version of the input

See also

numpy.pad
for details on the underlying functionality
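
A small sketch (illustrative values; mode='constant' is passed explicitly here so the padding is zero-valued):

>>> from gwpy.timeseries import StateVector
>>> sv = StateVector([1, 3, 7], sample_rate=1, epoch=10)
>>> padded = sv.pad(2, mode='constant')         # two samples at each end
>>> extended = sv.pad((0, 4), mode='constant')  # four samples at the end only
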
partition(kth, axis=-1, kind='introselect', order=None)

Rearranges the elements in the array in such a way that value of the element in kth position is in the position it would be in a sorted array. All elements smaller than the kth element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined.

New in version 1.8.0.

Parameters:

kth : int or sequence of ints

Element index to partition by. The kth element value will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of kth it will partition all elements indexed by kth of them into their sorted position at once.

axis : int, optional

Axis along which to sort. Default is -1, which means sort along the last axis.

kind : {‘introselect’}, optional

Selection algorithm. Default is ‘introselect’.

order : str or list of str, optional

When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

See also

numpy.partition
Return a parititioned copy of an array.
argpartition
Indirect partition.
sort
Full sort.

Notes

See np.partition for notes on the different algorithms.

Examples

>>> a = np.array([3, 4, 2, 1])
>>> a.partition(3)
>>> a
array([2, 1, 3, 4])
>>> a.partition((1, 3))
>>> a
array([1, 2, 3, 4])
plot(format='segments', bits=None, **kwargs)[source]

Plot the data for this StateVector

Parameters:

format : str, optional, default: 'segments'

type of plot to make, either ‘segments’ to plot the SegmentList for each bit, or ‘timeseries’ to plot the raw data for this StateVector

bits : list, optional

a list of bit indices or bit names, defaults to bits. This argument is ignored if format is not 'segments'

**kwargs :

other keyword arguments to be passed to either SegmentPlot or TimeSeriesPlot, depending on format.

Returns:

plot : SegmentPlot, or

TimeSeriesPlot

output plot object, subclass of Plot
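
A small sketch (illustrative values and bit names; drawing the figure requires matplotlib):

>>> from gwpy.timeseries import StateVector
>>> sv = StateVector([1, 3, 3, 7, 7], bits=['science', 'conlog', 'up'],
...                  sample_rate=1, epoch=0)
>>> plot = sv.plot(format='segments')
>>> plot.show()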

prepend(other, gap='raise', inplace=True, pad=0.0, resize=True)[source]

Connect another series onto the start of the current one.

Parameters:

other : Series

another series of the same type as this one

gap : str, optional, default: 'raise'

action to perform if there’s a gap between the other series and this one. One of

  • 'raise' - raise an Exception
  • 'ignore' - remove gap and join data
  • 'pad' - pad gap with zeros

inplace : bool, optional, default: True

perform operation in-place, modifying current series, otherwise copy data and return new series

Warning

inplace prepend bypasses the reference check in numpy.ndarray.resize, so be careful to only use this for arrays that haven’t been sharing their memory!

pad : float, optional, default: 0.0

value with which to pad discontiguous Series

resize : bool, optional, default: True

Returns:

series : TimeSeries

time-series containing joined data sets
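
Examples

A minimal sketch using two contiguous synthetic series; inplace=False is used so that neither input is modified:

>>> from gwpy.timeseries import TimeSeries
>>> a = TimeSeries([1., 2., 3., 4.], sample_rate=1, epoch=0)
>>> b = TimeSeries([5., 6., 7., 8.], sample_rate=1, epoch=4)
>>> joined = b.prepend(a, inplace=False)   # joined now spans [0, 8)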

prod(axis=None, dtype=None, out=None, keepdims=False)

Return the product of the array elements over the given axis

Refer to numpy.prod for full documentation.

See also

numpy.prod
equivalent function
ptp(axis=None, out=None)

Peak to peak (maximum - minimum) value along a given axis.

Refer to numpy.ptp for full documentation.

See also

numpy.ptp
equivalent function
put(indices, values, mode='raise')

Set a.flat[n] = values[n] for all n in indices.

Refer to numpy.put for full documentation.

See also

numpy.put
equivalent function
ravel([order])

Return a flattened array.

Refer to numpy.ravel for full documentation.

See also

numpy.ravel
equivalent function
ndarray.flat
a flat iterator on the array.
classmethod read(*args, **kwargs)

Read data into a StateVector

Parameters:

source : str, Cache

source of data, any of the following:

  • str path of single data file
  • str path of LAL-format cache file
  • Cache describing one or more data files,

channel : str, Channel

the name of the channel to read, or a Channel object.

start : LIGOTimeGPS, float, str, optional

GPS start time of required data, defaults to start of data found; any input parseable by to_gps is fine

end : LIGOTimeGPS, float, str, optional

GPS end time of required data, defaults to end of data found; any input parseable by to_gps is fine

bits : list, optional

list of bit names for this StateVector, give None at any point in the list to mask that bit

format : str, optional

source format identifier. If not given, the format will be detected if possible. See below for list of acceptable formats.

nproc : int, optional, default: 1

number of parallel processes to use, serial process by default.

Note

Parallel frame reading, via the nproc keyword argument, is only available when giving a Cache of frames, or using the format='cache' keyword argument.

gap : str, optional

how to handle gaps in the cache, one of

  • ‘ignore’: do nothing, let the underlying reader method handle it
  • ‘warn’: do nothing except print a warning to the screen
  • ‘raise’: raise an exception upon finding a gap (default)
  • ‘pad’: insert a value to fill the gaps

pad : float, optional

value with which to fill gaps in the source data, only used if gap is not given, or gap='pad' is given

Notes

The available built-in formats are:

Format Read Write Auto-identify
csv Yes Yes Yes
framecpp Yes Yes No
gwf Yes No Yes
hdf Yes Yes No
hdf5 Yes Yes Yes
lalframe Yes No No
losc Yes No No
txt Yes Yes Yes
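
Examples

A hedged sketch only: the file name, channel name, and bit labels below are placeholders, and reading real data requires access to the corresponding GWF file:

>>> from gwpy.timeseries import StateVector
>>> vector = StateVector.read('H-H1_TEST-968654552-10.gwf',    # placeholder file
...                           'H1:IFO-SV_STATE_VECTOR',        # placeholder channel
...                           bits=['Science mode', 'HW injection'])
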
repeat(repeats, axis=None)

Repeat elements of an array.

Refer to numpy.repeat for full documentation.

See also

numpy.repeat
equivalent function
resample(rate)[source]

Resample this StateVector to a new rate

Because of the nature of a state-vector, downsampling is done by taking the logical ‘and’ of all original samples in each new sampling interval, while upsampling is achieved by repeating samples.

Parameters:

rate : float

rate to which to resample this StateVector, must be a divisor of the original sample rate (when downsampling) or a multiple of the original (when upsampling).

Returns:

vector : StateVector

resampled version of the input StateVector
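
Examples

A minimal sketch with synthetic data; the printed result assumes the logical-AND downsampling behaviour described above (samples are paired when moving from 4 Hz to 2 Hz):

>>> from gwpy.timeseries import StateVector
>>> vector = StateVector([3, 3, 1, 0, 2, 3, 3, 3], sample_rate=4)
>>> print(vector.resample(2).value)
[3 0 2 3]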

reshape(shape, order='C')

Returns an array containing the same data with a new shape.

Refer to numpy.reshape for full documentation.

See also

numpy.reshape
equivalent function
resize(new_shape, refcheck=True)

Change shape and size of array in-place.

Parameters:

new_shape : tuple of ints, or n ints

Shape of resized array.

refcheck : bool, optional

If False, reference count will not be checked. Default is True.

Returns:

None :

Raises:

ValueError :

If a does not own its own data or references or views to it exist, and the data memory must be changed.

SystemError :

If the order keyword argument is specified (this behaviour is a bug in NumPy).

See also

resize
Return a new array with the specified shape.

Notes

This reallocates space for the data area if necessary.

Only contiguous arrays (data elements consecutive in memory) can be resized.

The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set refcheck to False.

Examples

Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped:

>>> a = np.array([[0, 1], [2, 3]], order='C')
>>> a.resize((2, 1))
>>> a
array([[0],
       [1]])
>>> a = np.array([[0, 1], [2, 3]], order='F')
>>> a.resize((2, 1))
>>> a
array([[0],
       [2]])

Enlarging an array: as above, but missing entries are filled with zeros:

>>> b = np.array([[0, 1], [2, 3]])
>>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
>>> b
array([[0, 1, 2],
       [3, 0, 0]])

Referencing an array prevents resizing...

>>> c = a
>>> a.resize((1, 1))
Traceback (most recent call last):
...
ValueError: cannot resize an array that has been referenced ...

Unless refcheck is False:

>>> a.resize((1, 1), refcheck=False)
>>> a
array([[0]])
>>> c
array([[0]])
round(decimals=0, out=None)

Return a with each element rounded to the given number of decimals.

Refer to numpy.around for full documentation.

See also

numpy.around
equivalent function
searchsorted(v, side='left', sorter=None)

Find indices where elements of v should be inserted in a to maintain order.

For full documentation, see numpy.searchsorted

See also

numpy.searchsorted
equivalent function
setfield(val, dtype, offset=0)

Put a value into a specified place in a field defined by a data-type.

Place val into a’s field defined by dtype and beginning offset bytes into the field.

Parameters:

val : object

Value to be placed in field.

dtype : dtype object

Data-type of the field in which to place val.

offset : int, optional

The number of bytes into the field at which to place val.

Returns:

None :

See also

getfield

Examples

>>> x = np.eye(3)
>>> x.getfield(np.float64)
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
>>> x.setfield(3, np.int32)
>>> x.getfield(np.int32)
array([[3, 3, 3],
       [3, 3, 3],
       [3, 3, 3]])
>>> x
array([[  1.00000000e+000,   1.48219694e-323,   1.48219694e-323],
       [  1.48219694e-323,   1.00000000e+000,   1.48219694e-323],
       [  1.48219694e-323,   1.48219694e-323,   1.00000000e+000]])
>>> x.setfield(np.eye(3), np.int32)
>>> x
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
setflags(write=None, align=None, uic=None)

Set array flags WRITEABLE, ALIGNED, and UPDATEIFCOPY, respectively.

These Boolean-valued flags affect how numpy interprets the memory area used by a (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The UPDATEIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.)

Parameters:

write : bool, optional

Describes whether or not a can be written to.

align : bool, optional

Describes whether or not a is aligned properly for its type.

uic : bool, optional

Describes whether or not a is a copy of another “base” array.

Notes

Array flags provide information about how the memory area used for the array is to be interpreted. There are 6 Boolean flags in use, only three of which can be changed by the user: UPDATEIFCOPY, WRITEABLE, and ALIGNED.

WRITEABLE (W) the data area can be written to;

ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler);

UPDATEIFCOPY (U) this array is a copy of some other array (referenced by .base). When this array is deallocated, the base array will be updated with the contents of this array.

All flags can be accessed using their first (upper case) letter as well as the full name.

Examples

>>> y
array([[3, 1, 7],
       [2, 0, 0],
       [8, 5, 9]])
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
>>> y.setflags(write=0, align=0)
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : False
  ALIGNED : False
  UPDATEIFCOPY : False
>>> y.setflags(uic=1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: cannot set UPDATEIFCOPY flag to True
sort(axis=-1, kind='quicksort', order=None)

Sort an array, in-place.

Parameters:

axis : int, optional

Axis along which to sort. Default is -1, which means sort along the last axis.

kind : {‘quicksort’, ‘mergesort’, ‘heapsort’}, optional

Sorting algorithm. Default is ‘quicksort’.

order : str or list of str, optional

When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

See also

numpy.sort
Return a sorted copy of an array.
argsort
Indirect sort.
lexsort
Indirect stable sort on multiple keys.
searchsorted
Find elements in sorted array.
partition
Partial sort.

Notes

See sort for notes on the different sorting algorithms.

Examples

>>> a = np.array([[1,4], [3,1]])
>>> a.sort(axis=1)
>>> a
array([[1, 4],
       [1, 3]])
>>> a.sort(axis=0)
>>> a
array([[1, 3],
       [1, 4]])

Use the order keyword to specify a field to use when sorting a structured array:

>>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
>>> a.sort(order='y')
>>> a
array([('c', 1), ('a', 2)],
      dtype=[('x', '|S1'), ('y', '<i4')])
squeeze(axis=None)

Remove single-dimensional entries from the shape of a.

Refer to numpy.squeeze for full documentation.

See also

numpy.squeeze
equivalent function
std(axis=None, dtype=None, out=None, ddof=0, keepdims=False)

Returns the standard deviation of the array elements along given axis.

Refer to numpy.std for full documentation.

See also

numpy.std
equivalent function
sum(axis=None, dtype=None, out=None, keepdims=False)

Return the sum of the array elements over the given axis.

Refer to numpy.sum for full documentation.

See also

numpy.sum
equivalent function
swapaxes(axis1, axis2)

Return a view of the array with axis1 and axis2 interchanged.

Refer to numpy.swapaxes for full documentation.

See also

numpy.swapaxes
equivalent function
take(indices, axis=None, out=None, mode='raise')

Return an array formed from the elements of a at the given indices.

Refer to numpy.take for full documentation.

See also

numpy.take
equivalent function
to(unit, equivalencies=[])

Returns a new Quantity object with the specified units.

Parameters:

unit : UnitBase instance, str

An object that represents the unit to convert to. Must be an UnitBase object or a string parseable by the units package.

equivalencies : list of equivalence pairs, optional

A list of equivalence pairs to try if the units are not directly convertible. See Equivalencies. If not provided or [], class default equivalencies will be used (none for Quantity, but may be set for subclasses). If None, no equivalencies will be applied at all, not even any set globally or within a context.
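
Examples

A minimal sketch of a unit conversion, shown here on a plain TimeSeries with an arbitrary unit:

>>> from gwpy.timeseries import TimeSeries
>>> data = TimeSeries([1000., 2500.], sample_rate=1, unit='m')
>>> data_km = data.to('km')   # values scaled by 1/1000, unit becomes kilometres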

to_dqflags(bits=None, minlen=1, dtype=<type 'float'>, round=False)[source]

Convert this StateVector into a DataQualityDict

The StateTimeSeries for each bit is converted into a DataQualityFlag with the bits combined into a dict.

Parameters:

minlen : int, optional, default: 1

minimum number of consecutive True values to identify as a Segment. This is useful to ignore single bit flips, for example.

bits : list, optional

a list of bit indices or bit names to select, defaults to bits

Returns:

DataQualityFlag list : list

a list of DataQualityFlag representations for each bit in this StateVector

See also

StateTimeSeries.to_dqflag()
for details on the segment representation method for StateVector bits
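
Examples

A minimal sketch with a synthetic two-bit StateVector; the bit labels and GPS epoch are arbitrary:

>>> from gwpy.timeseries import StateVector
>>> vector = StateVector([3, 3, 1, 0, 2, 3], bits=['Light on', 'Lock OK'],
...                      sample_rate=1, epoch=1064534416)
>>> flags = vector.to_dqflags(round=True)   # one DataQualityFlag per bit
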
to_hdf5(*args, **kwargs)[source]

Convert this array to a h5py.Dataset.

This method has been deprecated in favour of the unified I/O method:

Class.write(..., format='hdf5')

to_lal(*args, **kwargs)[source]

Bogus function inherited from superclass, do not use.

to_pycbc(*args, **kwargs)[source]

Convert this TimeSeries into a PyCBC TimeSeries

Parameters:

copy : bool, optional, default: True

if True, copy these data to a new array

Returns:

timeseries : TimeSeries

a PyCBC representation of this TimeSeries
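
Examples

A minimal sketch; this requires the optional pycbc package to be installed:

>>> from gwpy.timeseries import TimeSeries
>>> data = TimeSeries([1., 2., 3., 4.], sample_rate=4096)
>>> pycbc_ts = data.to_pycbc()   # copies the data into a PyCBC TimeSeries by default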

tobytes(order='C')

Construct Python bytes containing the raw data bytes in the array.

Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object can be produced in either ‘C’ or ‘Fortran’, or ‘Any’ order (the default is ‘C’-order). ‘Any’ order means C-order unless the F_CONTIGUOUS flag in the array is set, in which case it means ‘Fortran’ order.

New in version 1.9.0.

Parameters:

order : {‘C’, ‘F’, None}, optional

Order of the data for multidimensional arrays: C, Fortran, or the same as for the original array.

Returns:

s : bytes

Python bytes exhibiting a copy of a’s raw data.

Examples

>>> x = np.array([[0, 1], [2, 3]])
>>> x.tobytes()
b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00'
>>> x.tobytes('C') == x.tobytes()
True
>>> x.tobytes('F')
b'\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00'
tofile(fid, sep="", format="%s")

Write array to a file as text or binary (default).

Data is always written in ‘C’ order, independent of the order of a. The data produced by this method can be recovered using the function fromfile().

Parameters:

fid : file or str

An open file object, or a string containing a filename.

sep : str

Separator between array items for text output. If “” (empty), a binary file is written, equivalent to file.write(a.tobytes()).

format : str

Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item.

Notes

This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size.

tolist()

Return the array as a (possibly nested) list.

Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible Python type.

Parameters:

none :

Returns:

y : list

The possibly nested list of array elements.

Notes

The array may be recreated, a = np.array(a.tolist()).

Examples

>>> a = np.array([1, 2])
>>> a.tolist()
[1, 2]
>>> a = np.array([[1, 2], [3, 4]])
>>> list(a)
[array([1, 2]), array([3, 4])]
>>> a.tolist()
[[1, 2], [3, 4]]
tostring(order='C')[source]

Construct Python bytes containing the raw data bytes in the array.

Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object can be produced in either ‘C’ or ‘Fortran’, or ‘Any’ order (the default is ‘C’-order). ‘Any’ order means C-order unless the F_CONTIGUOUS flag in the array is set, in which case it means ‘Fortran’ order.

This function is a compatibility alias for tobytes. Despite its name it returns bytes not strings.

Parameters:

order : {‘C’, ‘F’, None}, optional

Order of the data for multidimensional arrays: C, Fortran, or the same as for the original array.

Returns:

s : bytes

Python bytes exhibiting a copy of a’s raw data.

Examples

>>> x = np.array([[0, 1], [2, 3]])
>>> x.tobytes()
b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00'
>>> x.tobytes('C') == x.tobytes()
True
>>> x.tobytes('F')
b'\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00'
trace(offset=0, axis1=0, axis2=1, dtype=None, out=None)

Return the sum along diagonals of the array.

Refer to numpy.trace for full documentation.

See also

numpy.trace
equivalent function
transpose(*axes)

Returns a view of the array with axes transposed.

For a 1-D array, this has no effect. (To change between column and row vectors, first cast the 1-D array into a matrix object.) For a 2-D array, this is the usual matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and a.shape = (i[0], i[1], ... i[n-2], i[n-1]), then a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0]).

Parameters:

axes : None, tuple of ints, or n ints

  • None or no argument: reverses the order of the axes.
  • tuple of ints: i in the j-th place in the tuple means a’s i-th axis becomes a.transpose()’s j-th axis.
  • n ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form)
Returns:

out : ndarray

View of a, with axes suitably permuted.

See also

ndarray.T
Array property returning the array transposed.

Examples

>>> a = np.array([[1, 2], [3, 4]])
>>> a
array([[1, 2],
       [3, 4]])
>>> a.transpose()
array([[1, 3],
       [2, 4]])
>>> a.transpose((1, 0))
array([[1, 3],
       [2, 4]])
>>> a.transpose(1, 0)
array([[1, 3],
       [2, 4]])
update(other, inplace=True)[source]

Update this series by appending new data from another and dropping the same amount of data off the start.

This is a convenience method that just calls append with resize=False.

value_at(x)[source]

Return the value of this Series at the given xindex value

Parameters:

x : float, Quantity

the xindex value at which to search

Returns:

y : Quantity

the value of this Series at the given xindex value
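
Examples

A minimal sketch using synthetic data, where the xindex values are GPS times:

>>> from gwpy.timeseries import TimeSeries
>>> data = TimeSeries([0., 10., 20., 30.], sample_rate=1, epoch=1000000000)
>>> data.value_at(1000000002)   # the sample recorded at GPS 1000000002 (20.0 here)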

var(axis=None, dtype=None, out=None, ddof=0, keepdims=False)

Returns the variance of the array elements, along given axis.

Refer to numpy.var for full documentation.

See also

numpy.var
equivalent function
view(dtype=None, type=None)

New view of array with the same data.

Parameters:

dtype : data-type or ndarray sub-class, optional

Data-type descriptor of the returned view, e.g., float32 or int16. The default, None, results in the view having the same data-type as a. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the type parameter).

type : Python type, optional

Type of the returned view, e.g., ndarray or matrix. Again, the default None results in type preservation.

Notes

a.view() is used two different ways:

a.view(some_dtype) or a.view(dtype=some_dtype) constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory.

a.view(ndarray_subclass) or a.view(type=ndarray_subclass) just returns an instance of ndarray_subclass that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory.

For a.view(some_dtype), if some_dtype has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of a (shown by print(a)). It also depends on exactly how a is stored in memory. Therefore if a is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results.

Examples

>>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])

Viewing array data using a different type and dtype:

>>> y = x.view(dtype=np.int16, type=np.matrix)
>>> y
matrix([[513]], dtype=int16)
>>> print(type(y))
<class 'numpy.matrixlib.defmatrix.matrix'>

Creating a view on a structured array so it can be used in calculations

>>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
>>> xv = x.view(dtype=np.int8).reshape(-1,2)
>>> xv
array([[1, 2],
       [3, 4]], dtype=int8)
>>> xv.mean(0)
array([ 2.,  3.])

Making changes to the view changes the underlying array

>>> xv[0,1] = 20
>>> print(x)
[(1, 20) (3, 4)]

Using a view to convert an array to a recarray:

>>> z = x.view(np.recarray)
>>> z.a
array([1], dtype=int8)

Views share data:

>>> x[0] = (9, 10)
>>> z[0]
(9, 10)

Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.:

>>> x = np.array([[1,2,3],[4,5,6]], dtype=np.int16)
>>> y = x[:, 0:2]
>>> y
array([[1, 2],
       [4, 5]], dtype=int16)
>>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: new type not compatible with array.
>>> z = y.copy()
>>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
array([[(1, 2)],
       [(4, 5)]], dtype=[('width', '<i2'), ('length', '<i2')])
write(data, *args, **kwargs)

Write this StateVector to a file

Parameters:

outfile : str

path of output file

Notes

The available built-in formats are:

Format Read Write Auto-identify
csv Yes Yes Yes
framecpp Yes Yes No
hdf Yes Yes No
hdf5 Yes Yes Yes
txt Yes Yes Yes
zip()[source]

Zip the xindex and value arrays of this Series

Returns:

stacked : 2-d numpy.ndarray

The array formed by stacking the xindex and value of this series

Examples

>>> a = Series([0, 2, 4, 6, 8], xindex=[-5, -4, -3, -2, -1])
>>> a.zip()
array([[-5.,  0.],
       [-4.,  2.],
       [-3.,  4.],
       [-2.,  6.],
       [-1.,  8.]])
class gwpy.timeseries.StateTimeSeries[source]

Bases: gwpy.timeseries.core.TimeSeriesBase

Boolean array representing a good/bad state determination of some data.

Parameters:

value : array-like

input data array

unit : Unit, optional

physical unit of these data

epoch : LIGOTimeGPS, float, str

GPS epoch associated with these data, any input parsable by to_gps is fine

sample_rate : float, Quantity, optional, default: 1

the rate of samples per second (Hertz)

times : array-like

the complete array of GPS times accompanying the data for this series. This argument takes precedence over epoch and sample_rate so should be given in place of these if relevant, not alongside

name : str, optional, default: None

descriptive title for this array

channel : Channel, str

source data stream for these data

dtype : dtype, optional, default: None

input data type

copy : bool, optional, default: False

choose to copy the input data to new memory

subok : bool, optional, default: True

allow passing of sub-classes by the array generator

Notes

The input data array is cast to the bool data type upon creation of this series.

Attributes Summary

T Same as self.transpose(), except that self is returned if self.ndim < 2.
base Base object if memory is from some other object.
cgs Returns a copy of the current Quantity instance with CGS units.
channel Instrumental channel associated with these data
ctypes An object to simplify the interaction of the array with the ctypes module.
data
dt X-axis sample separation
dtype Data-type of the array’s elements.
duration Duration of this series in seconds
dx X-axis sample separation
epoch GPS epoch for these data.
equivalencies A list of equivalencies that will be applied by default during unit conversions.
flags Information about the memory layout of the array.
flat A 1-D iterator over the Quantity array.
imag The imaginary part of the array.
info Container for meta information like name, description, format.
isscalar True if the value of this quantity is a scalar, or False if it is an array-like object.
itemsize Length of one array element in bytes.
name Name for this data set
nbytes Total bytes consumed by the elements of the array.
ndim Number of array dimensions.
real The real part of the array.
sample_rate Data rate for this TimeSeries in samples per second (Hertz).
shape Tuple of array dimensions.
si Returns a copy of the current Quantity instance with SI units.
size Number of elements in the array.
span X-axis [low, high) segment encompassed by these data
strides Tuple of bytes to step in each dimension when traversing an array.
times Series of GPS times for each sample
unit The physical unit of these data
value The numerical value of this quantity.
x0 X-axis value of the first data point
xindex Positions of the data on the x-axis
xspan X-axis [low, high) segment encompassed by these data
xunit Unit of x-axis index

Methods Summary

all([axis, out, keepdims]) Returns True if all elements evaluate to True.
any([axis, out, keepdims]) Returns True if any of the elements of a evaluate to True.
append(other[, gap, inplace, pad, resize]) Connect another series onto the end of the current one.
argmax([axis, out]) Return indices of the maximum values along the given axis.
argmin([axis, out]) Return indices of the minimum values along the given axis of a.
argpartition(kth[, axis, kind, order]) Returns the indices that would partition this array.
argsort([axis, kind, order]) Returns the indices that would sort this array.
astype(dtype[, order, casting, subok, copy]) Copy of the array, cast to a specified type.
byteswap(inplace) Swap the bytes of the array elements
choose(choices[, out, mode]) Use an index array to construct a new array from a set of choices.
clip([min, max, out]) Return an array whose values are limited to [min, max].
compress(condition[, axis, out]) Return selected slices of this array along given axis.
conj() Complex-conjugate all elements.
conjugate() Return the complex conjugate, element-wise.
copy([order]) Return a copy of the array.
copy_metadata() Return a deepcopy of the metadata for this array
crop([start, end, copy]) Crop this series to the given x-axis extent.
cumprod([axis, dtype, out]) Return the cumulative product of the elements along the given axis.
cumsum([axis, dtype, out]) Return the cumulative sum of the elements along the given axis.
decompose([bases]) Generates a new Quantity with the units decomposed.
diagonal([offset, axis1, axis2]) Return specified diagonals.
diff([n, axis]) Calculate the n-th discrete difference along given axis.
dot(b[, out]) Dot product of two arrays.
dump(file) Dump a pickle of the array to the specified file.
dumps() Returns the pickle of the array as a string.
ediff1d([to_end, to_begin])
fetch(*args, **kwargs) Fetch data from NDS into a TimeSeries.
fetch_open_data(ifo, start, end[, name, ...]) Fetch open-access data from the LIGO Open Science Center
fill(value) Fill the array with a scalar value.
find(channel, start, end[, frametype, pad, ...]) Find and read data from frames for a channel
flatten([order]) Return a copy of the array collapsed into one dimension.
from_hdf5(*args, **kwargs) Read an array from the given HDF file.
from_lal(*args, **kwargs) Generate a new TimeSeries from a LAL TimeSeries of any type.
from_nds2_buffer(*args, **kwargs) Construct a new TimeSeries from an nds2.buffer object
from_pycbc(ts) Convert a pycbc.types.timeseries.TimeSeries into a TimeSeries
get(channel, start, end[, pad, dtype, verbose]) Get data for this channel from frames or NDS
getfield(dtype[, offset]) Returns a field of the given array as a certain type.
insert(obj, values[, axis]) Insert values along the given axis before the given indices and return a new Quantity object.
is_compatible(other) Check whether this series and other have compatible metadata
is_contiguous(other[, tol]) Check whether other is contiguous with self.
item(*args) Copy an element of an array to a standard Python scalar and return it.
itemset(*args) Insert scalar into an array (scalar is cast to array’s dtype, if possible)
max([axis, out]) Return the maximum along a given axis.
mean([axis, dtype, out, keepdims]) Returns the average of the array elements along given axis.
median([axis]) Compute the median along the specified axis.
min([axis, out, keepdims]) Return the minimum along a given axis.
nansum([axis, out, keepdims])
newbyteorder([new_order]) Return the array with the same data viewed with a different byte order.
nonzero() Return the indices of the elements that are non-zero.
override_unit(unit[, parse_strict]) Forcefully reset the unit of these data
pad(pad_width, **kwargs) Pad this series to a new size
partition(kth[, axis, kind, order]) Rearranges the elements in the array in such a way that value of the element in kth position is in the position it would be in a sorted array.
plot(**kwargs) Plot the data for this TimeSeries
prepend(other[, gap, inplace, pad, resize]) Connect another series onto the start of the current one.
prod([axis, dtype, out, keepdims]) Return the product of the array elements over the given axis
ptp([axis, out]) Peak to peak (maximum - minimum) value along a given axis.
put(indices, values[, mode]) Set a.flat[n] = values[n] for all n in indices.
ravel([order]) Return a flattened array.
read(*args, **kwargs) Read in data
repeat(repeats[, axis]) Repeat elements of an array.
reshape(shape[, order]) Returns an array containing the same data with a new shape.
resize(new_shape[, refcheck]) Change shape and size of array in-place.
round([decimals, out]) Return a with each element rounded to the given number of decimals.
searchsorted(v[, side, sorter]) Find indices where elements of v should be inserted in a to maintain order.
setfield(val, dtype[, offset]) Put a value into a specified place in a field defined by a data-type.
setflags([write, align, uic]) Set array flags WRITEABLE, ALIGNED, and UPDATEIFCOPY, respectively.
sort([axis, kind, order]) Sort an array, in-place.
squeeze([axis]) Remove single-dimensional entries from the shape of a.
std([axis, dtype, out, ddof, keepdims]) Returns the standard deviation of the array elements along given axis.
sum([axis, dtype, out, keepdims]) Return the sum of the array elements over the given axis.
swapaxes(axis1, axis2) Return a view of the array with axis1 and axis2 interchanged.
take(indices[, axis, out, mode]) Return an array formed from the elements of a at the given indices.
to(unit[, equivalencies]) Returns a new Quantity object with the specified units.
to_dqflag([name, minlen, dtype, round, ...]) Convert this StateTimeSeries into a DataQualityFlag.
to_hdf5(*args, **kwargs) Convert this array to a h5py.Dataset.
to_lal(*args, **kwargs) Bogus function inherited from superclass, do not use.
to_pycbc(*args, **kwargs) Convert this TimeSeries into a PyCBC TimeSeries.
tobytes([order]) Construct Python bytes containing the raw data bytes in the array.
tofile(fid[, sep, format]) Write array to a file as text or binary (default).
tolist() Return the array as a (possibly nested) list.
tostring([order]) Construct Python bytes containing the raw data bytes in the array.
trace([offset, axis1, axis2, dtype, out]) Return the sum along diagonals of the array.
transpose(*axes) Returns a view of the array with axes transposed.
update(other[, inplace]) Update this series by appending new data from another and dropping the same amount of data off the start.
value_at(x) Return the value of this Series at the given xindex value
var([axis, dtype, out, ddof, keepdims]) Returns the variance of the array elements, along given axis.
view([dtype, type]) New view of array with the same data.
write(data, *args, **kwargs) Write out data
zip() Zip the xindex and value arrays of this Series

Attributes Documentation

T

Same as self.transpose(), except that self is returned if self.ndim < 2.

Examples

>>> x = np.array([[1.,2.],[3.,4.]])
>>> x
array([[ 1.,  2.],
       [ 3.,  4.]])
>>> x.T
array([[ 1.,  3.],
       [ 2.,  4.]])
>>> x = np.array([1.,2.,3.,4.])
>>> x
array([ 1.,  2.,  3.,  4.])
>>> x.T
array([ 1.,  2.,  3.,  4.])
base

Base object if memory is from some other object.

Examples

The base of an array that owns its memory is None:

>>> x = np.array([1,2,3,4])
>>> x.base is None
True

Slicing creates a view, whose memory is shared with x:

>>> y = x[2:]
>>> y.base is x
True
cgs

Returns a copy of the current Quantity instance with CGS units. The value of the resulting object will be scaled.

channel

Instrumental channel associated with these data

Type:Channel
ctypes

An object to simplify the interaction of the array with the ctypes module.

This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library.

Parameters:

None :

Returns:

c : Python object

Possessing attributes data, shape, strides, etc.

See also

numpy.ctypeslib

Notes

Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes):

  • data: A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as self.__array_interface__['data'][0].
  • shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to dtype(‘p’) on this platform. This base-type could be c_int, c_long, or c_longlong depending on the platform. The c_intp type is defined accordingly in numpy.ctypeslib. The ctypes array contains the shape of the underlying array.
  • strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array.
  • data_as(obj): Return the data pointer cast to a particular c-types object. For example, calling self._as_parameter_ is equivalent to self.data_as(ctypes.c_void_p). Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: self.data_as(ctypes.POINTER(ctypes.c_double)).
  • shape_as(obj): Return the shape tuple as an array of some other c-types type. For example: self.shape_as(ctypes.c_short).
  • strides_as(obj): Return the strides tuple as an array of some other c-types type. For example: self.strides_as(ctypes.c_longlong).

Be careful using the ctypes attribute - especially on temporary arrays or arrays constructed on the fly. For example, calling (a+b).ctypes.data_as(ctypes.c_void_p) returns a pointer to memory that is invalid because the array created as (a+b) is deallocated before the next Python statement. You can avoid this problem using either c=a+b or ct=(a+b).ctypes. In the latter case, ct will hold a reference to the array until ct is deleted or re-assigned.

If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the _as_parameter_ attribute which will return an integer equal to the data attribute.

Examples

>>> import ctypes
>>> x
array([[0, 1],
       [2, 3]])
>>> x.ctypes.data
30439712
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long))
<ctypes.LP_c_long object at 0x01F01300>
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_long)).contents
c_long(0)
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_longlong)).contents
c_longlong(4294967296L)
>>> x.ctypes.shape
<numpy.core._internal.c_long_Array_2 object at 0x01FFD580>
>>> x.ctypes.shape_as(ctypes.c_long)
<numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
>>> x.ctypes.strides
<numpy.core._internal.c_long_Array_2 object at 0x01FCE620>
>>> x.ctypes.strides_as(ctypes.c_longlong)
<numpy.core._internal.c_longlong_Array_2 object at 0x01F01300>
data
dt

X-axis sample separation

Type:Quantity scalar
dtype

Data-type of the array’s elements.

Parameters:

None :

Returns:

d : numpy dtype object

See also

numpy.dtype

Examples

>>> x
array([[0, 1],
       [2, 3]])
>>> x.dtype
dtype('int32')
>>> type(x.dtype)
<type 'numpy.dtype'>
duration

Duration of this series in seconds

dx

X-axis sample separation

Type:Quantity scalar
epoch

GPS epoch for these data.

This attribute is stored internally by the x0 attribute

Type:Time
equivalencies

A list of equivalencies that will be applied by default during unit conversions.

flags

Information about the memory layout of the array.

Notes

The flags object can be accessed dictionary-like (as in a.flags['WRITEABLE']), or by using lowercased attribute names (as in a.flags.writeable). Short flag names are only supported in dictionary access.

Only the UPDATEIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling ndarray.setflags.

The array flags cannot be set arbitrarily:

  • UPDATEIFCOPY can only be set False.
  • ALIGNED can only be set True if the data is truly aligned.
  • WRITEABLE can only be set True if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string.

Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays.

Even for contiguous arrays a stride for a given dimension arr.strides[dim] may be arbitrary if arr.shape[dim] == 1 or the array has no elements. It does not generally hold that self.strides[-1] == self.itemsize for C-style contiguous arrays, or that self.strides[0] == self.itemsize for Fortran-style contiguous arrays.

flat

A 1-D iterator over the Quantity array.

This returns a QuantityIterator instance, which behaves the same as the flatiter instance returned by flat, and is similar to, but not a subclass of, Python’s built-in iterator object.

imag

The imaginary part of the array.

Examples

>>> x = np.sqrt([1+0j, 0+1j])
>>> x.imag
array([ 0.        ,  0.70710678])
>>> x.imag.dtype
dtype('float64')
info

Container for meta information like name, description, format. This is required when the object is used as a mixin column within a table, but can be used as a general way to store meta information.

isscalar

True if the value of this quantity is a scalar, or False if it is an array-like object.

Note

This is subtly different from numpy.isscalar in that numpy.isscalar returns False for a zero-dimensional array (e.g. np.array(1)), while this is True for quantities, since quantities cannot represent true numpy scalars.

itemsize

Length of one array element in bytes.

Examples

>>> x = np.array([1,2,3], dtype=np.float64)
>>> x.itemsize
8
>>> x = np.array([1,2,3], dtype=np.complex128)
>>> x.itemsize
16
name

Name for this data set

Type:str
nbytes

Total bytes consumed by the elements of the array.

Notes

Does not include memory consumed by non-element attributes of the array object.

Examples

>>> x = np.zeros((3,5,2), dtype=np.complex128)
>>> x.nbytes
480
>>> np.prod(x.shape) * x.itemsize
480
ndim

Number of array dimensions.

Examples

>>> x = np.array([1, 2, 3])
>>> x.ndim
1
>>> y = np.zeros((2, 3, 4))
>>> y.ndim
3
real

The real part of the array.

See also

numpy.real
equivalent function

Examples

>>> x = np.sqrt([1+0j, 0+1j])
>>> x.real
array([ 1.        ,  0.70710678])
>>> x.real.dtype
dtype('float64')
sample_rate

Data rate for this TimeSeries in samples per second (Hertz).

This attribute is stored internally by the dx attribute

Type:Quantity scalar
shape

Tuple of array dimensions.

Notes

May be used to “reshape” the array, as long as this would not require a change in the total number of elements

Examples

>>> x = np.array([1, 2, 3, 4])
>>> x.shape
(4,)
>>> y = np.zeros((2, 3, 4))
>>> y.shape
(2, 3, 4)
>>> y.shape = (3, 8)
>>> y
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]])
>>> y.shape = (3, 6)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: total size of new array must be unchanged
si

Returns a copy of the current Quantity instance with SI units. The value of the resulting object will be scaled.

size

Number of elements in the array.

Equivalent to np.prod(a.shape), i.e., the product of the array’s dimensions.

Examples

>>> x = np.zeros((3, 5, 2), dtype=np.complex128)
>>> x.size
30
>>> np.prod(x.shape)
30
span

X-axis [low, high) segment encompassed by these data

Type:Segment
strides

Tuple of bytes to step in each dimension when traversing an array.

The byte offset of element (i[0], i[1], ..., i[n]) in an array a is:

offset = sum(np.array(i) * a.strides)

A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide.

See also

numpy.lib.stride_tricks.as_strided

Notes

Imagine an array of 32-bit integers (each 4 bytes):

x = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]], dtype=np.int32)

This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array x will be (20, 4).

Examples

>>> y = np.reshape(np.arange(2*3*4), (2,3,4))
>>> y
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],
       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
>>> y.strides
(48, 16, 4)
>>> y[1,1,1]
17
>>> offset=sum(y.strides * np.array((1,1,1)))
>>> offset/y.itemsize
17
>>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
>>> x.strides
(32, 4, 224, 1344)
>>> i = np.array([3,5,2,2])
>>> offset = sum(i * x.strides)
>>> x[3,5,2,2]
813
>>> offset / x.itemsize
813
times

Series of GPS times for each sample

unit

The physical unit of these data

Type:UnitBase
value

The numerical value of this quantity.

x0

X-axis value of the first data point

Type:Quantity scalar
xindex

Positions of the data on the x-axis

Type:Quantity array
xspan

X-axis [low, high) segment encompassed by these data

Type:Segment
xunit

Unit of x-axis index

Type:Unit

Methods Documentation

all(axis=None, out=None, keepdims=False)

Returns True if all elements evaluate to True.

Refer to numpy.all for full documentation.

See also

numpy.all
equivalent function
any(axis=None, out=None, keepdims=False)

Returns True if any of the elements of a evaluate to True.

Refer to numpy.any for full documentation.

See also

numpy.any
equivalent function
append(other, gap='raise', inplace=True, pad=0.0, resize=True)[source]

Connect another series onto the end of the current one.

Parameters:

other : Series

another series of the same type to connect to this one

gap : str, optional, default: 'raise'

action to perform if there’s a gap between the other series and this one. One of

  • 'raise' - raise an Exception
  • 'ignore' - remove gap and join data
  • 'pad' - pad gap with zeros

inplace : bool, optional, default: True

perform operation in-place, modifying current Series, otherwise copy data and return new Series

Warning

inplace append bypasses the reference check in numpy.ndarray.resize, so be careful to only use this for arrays that haven’t been sharing their memory!

pad : float, optional, default: 0.0

value with which to pad discontiguous series

resize : bool, optional, default: True

resize this array to accommodate new data, otherwise shift the old data to the left (potentially falling off the start) and put the new data in at the end

Returns:

series : Series

a new series containing joined data sets
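
Examples

A minimal sketch with two contiguous synthetic series; inplace=False leaves the originals untouched:

>>> from gwpy.timeseries import TimeSeries
>>> a = TimeSeries([1., 2., 3., 4.], sample_rate=1, epoch=0)
>>> b = TimeSeries([5., 6., 7., 8.], sample_rate=1, epoch=4)
>>> joined = a.append(b, inplace=False)   # b starts where a ends, so no gap handling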

argmax(axis=None, out=None)

Return indices of the maximum values along the given axis.

Refer to numpy.argmax for full documentation.

See also

numpy.argmax
equivalent function
argmin(axis=None, out=None)

Return indices of the minimum values along the given axis of a.

Refer to numpy.argmin for detailed documentation.

See also

numpy.argmin
equivalent function
argpartition(kth, axis=-1, kind='introselect', order=None)

Returns the indices that would partition this array.

Refer to numpy.argpartition for full documentation.

New in version 1.8.0.

See also

numpy.argpartition
equivalent function
argsort(axis=-1, kind='quicksort', order=None)

Returns the indices that would sort this array.

Refer to numpy.argsort for full documentation.

See also

numpy.argsort
equivalent function
astype(dtype, order='K', casting='unsafe', subok=True, copy=True)

Copy of the array, cast to a specified type.

Parameters:

dtype : str or dtype

Typecode or data-type to which the array is cast.

order : {‘C’, ‘F’, ‘A’, ‘K’}, optional

Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’.

casting : {‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional

Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility.

  • ‘no’ means the data types should not be cast at all.
  • ‘equiv’ means only byte-order changes are allowed.
  • ‘safe’ means only casts which can preserve values are allowed.
  • ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.
  • ‘unsafe’ means any data conversions may be done.

subok : bool, optional

If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array.

copy : bool, optional

By default, astype always returns a newly allocated array. If this is set to false, and the dtype, order, and subok requirements are satisfied, the input array is returned instead of a copy.

Returns:

arr_t : ndarray

Unless copy is False and the other conditions for returning the input array are satisfied (see description for copy input parameter), arr_t is a new array of the same shape as the input array, with dtype, order given by dtype, order.

Raises:

ComplexWarning :

When casting from complex to float or int. To avoid this, one should use a.real.astype(t).

Notes

Starting in NumPy 1.9, astype method now returns an error if the string dtype to cast to is not long enough in ‘safe’ casting mode to hold the max value of integer/float array that is being casted. Previously the casting was allowed even if the result was truncated.

Examples

>>> x = np.array([1, 2, 2.5])
>>> x
array([ 1. ,  2. ,  2.5])
>>> x.astype(int)
array([1, 2, 2])
byteswap(inplace)

Swap the bytes of the array elements

Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place.

Parameters:

inplace : bool, optional

If True, swap bytes in-place, default is False.

Returns:

out : ndarray

The byteswapped array. If inplace is True, this is a view to self.

Examples

>>> A = np.array([1, 256, 8755], dtype=np.int16)
>>> map(hex, A)
['0x1', '0x100', '0x2233']
>>> A.byteswap(True)
array([  256,     1, 13090], dtype=int16)
>>> map(hex, A)
['0x100', '0x1', '0x3322']

Arrays of strings are not swapped

>>> A = np.array(['ceg', 'fac'])
>>> A.byteswap()
array(['ceg', 'fac'],
      dtype='|S3')
choose(choices, out=None, mode='raise')

Use an index array to construct a new array from a set of choices.

Refer to numpy.choose for full documentation.

See also

numpy.choose
equivalent function
clip(min=None, max=None, out=None)

Return an array whose values are limited to [min, max]. One of max or min must be given.

Refer to numpy.clip for full documentation.

See also

numpy.clip
equivalent function
compress(condition, axis=None, out=None)

Return selected slices of this array along given axis.

Refer to numpy.compress for full documentation.

See also

numpy.compress
equivalent function
conj()

Complex-conjugate all elements.

Refer to numpy.conjugate for full documentation.

See also

numpy.conjugate
equivalent function
conjugate()

Return the complex conjugate, element-wise.

Refer to numpy.conjugate for full documentation.

See also

numpy.conjugate
equivalent function
copy(order='C')[source]

Return a copy of the array.

Parameters:

order : {‘C’, ‘F’, ‘A’, ‘K’}, optional

Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if a is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of a as closely as possible. (Note that this function and numpy.copy are very similar, but have different default values for their order= arguments.)

Examples

>>> x = np.array([[1,2,3],[4,5,6]], order='F')
>>> y = x.copy()
>>> x.fill(0)
>>> x
array([[0, 0, 0],
       [0, 0, 0]])
>>> y
array([[1, 2, 3],
       [4, 5, 6]])
>>> y.flags['C_CONTIGUOUS']
True
copy_metadata()[source]

Return a deepcopy of the metadata for this array

crop(start=None, end=None, copy=False)[source]

Crop this series to the given x-axis extent.

Parameters:

start : float, optional

lower limit of x-axis to crop to, defaults to current x0

end : float, optional

upper limit of x-axis to crop to, defaults to current series end

copy : bool, optional, default: False

copy the input data to fresh memory, otherwise return a view

Returns:

series : Series

A new series with a sub-set of the input data

Notes

If either start or end are outside of the original Series span, warnings will be printed and the limits will be restricted to the xspan
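
Examples

A minimal sketch with synthetic data spanning one second at 16 Hz:

>>> from gwpy.timeseries import TimeSeries
>>> data = TimeSeries(range(16), sample_rate=16, epoch=1000000000)
>>> cropped = data.crop(1000000000.25, 1000000000.75)   # keep the middle half second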

cumprod(axis=None, dtype=None, out=None)

Return the cumulative product of the elements along the given axis.

Refer to numpy.cumprod for full documentation.

See also

numpy.cumprod
equivalent function
cumsum(axis=None, dtype=None, out=None)

Return the cumulative sum of the elements along the given axis.

Refer to numpy.cumsum for full documentation.

See also

numpy.cumsum
equivalent function
decompose(bases=[])

Generates a new Quantity with the units decomposed. Decomposed units have only irreducible units in them (see astropy.units.UnitBase.decompose).

Parameters:

bases : sequence of UnitBase, optional

The bases to decompose into. When not provided, decomposes down to any irreducible units. When provided, the decomposed result will only contain the given units. This will raise a UnitsError if it’s not possible to do so.

Returns:

newq : Quantity

A new object equal to this quantity with units decomposed.

diagonal(offset=0, axis1=0, axis2=1)

Return specified diagonals. In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed.

Refer to numpy.diagonal() for full documentation.

See also

numpy.diagonal
equivalent function
diff(n=1, axis=-1)[source]

Calculate the n-th discrete difference along given axis.

The first difference is given by out[n] = a[n+1] - a[n] along the given axis, higher differences are calculated by using diff recursively.

Parameters:

a : array_like

Input array

n : int, optional

The number of times values are differenced.

axis : int, optional

The axis along which the difference is taken, default is the last axis.

Returns:

diff : ndarray

The n-th differences. The shape of the output is the same as a except along axis where the dimension is smaller by n.

Examples

>>> x = np.array([1, 2, 4, 7, 0])
>>> np.diff(x)
array([ 1,  2,  3, -7])
>>> np.diff(x, n=2)
array([  1,   1, -10])
>>> x = np.array([[1, 3, 6, 10], [0, 5, 6, 8]])
>>> np.diff(x)
array([[2, 3, 4],
       [5, 1, 2]])
>>> np.diff(x, axis=0)
array([[-1,  2,  0, -2]])
dot(b, out=None)

Dot product of two arrays.

Refer to numpy.dot for full documentation.

See also

numpy.dot
equivalent function

Examples

>>> a = np.eye(2)
>>> b = np.ones((2, 2)) * 2
>>> a.dot(b)
array([[ 2.,  2.],
       [ 2.,  2.]])

This array method can be conveniently chained:

>>> a.dot(b).dot(b)
array([[ 8.,  8.],
       [ 8.,  8.]])
dump(file)

Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load.

Parameters:

file : str

A string naming the dump file.

dumps()[source]

Returns the pickle of the array as a string. pickle.loads or numpy.loads will convert the string back to an array.

Parameters:

None :
ediff1d(to_end=None, to_begin=None)
fetch(*args, **kwargs)[source]

Fetch data from NDS into a TimeSeries.

Parameters:

channel : str, Channel

the name of the channel to read, or a Channel object.

start : LIGOTimeGPS, float, str

GPS start time of required data, any input parseable by to_gps is fine

end : LIGOTimeGPS, float, str

GPS end time of required data, any input parseable by to_gps is fine

host : str, optional

URL of NDS server to use, defaults to observatory site host

port : int, optional

port number for NDS server query, must be given with host

verify : bool, optional, default: True

check channels exist in database before asking for data

connection : NDS2Connection

open NDS connection to use

verbose : bool, optional

print verbose output about NDS progress

type : int, optional

NDS2 channel type integer

dtype : type, numpy.dtype, str, optional

identifier for desired output data type

fetch_open_data(ifo, start, end, name='strain/Strain', sample_rate=4096, host='https://losc.ligo.org')[source]

Fetch open-access data from the LIGO Open Science Center

Parameters:

ifo : str

the two-character prefix of the IFO in which you are interested, e.g. 'L1'

start : LIGOTimeGPS, float, str, optional

GPS start time of required data, defaults to start of data found; any input parseable by to_gps is fine

end : LIGOTimeGPS, float, str, optional

GPS end time of required data, defaults to end of data found; any input parseable by to_gps is fine

name : str, optional

the full name of HDF5 dataset that represents the data you want, e.g. 'strain/Strain' for _h(t)_ data, or 'quality/simple' for basic data-quality information

sample_rate : float, optional, default: 4096

the sample rate of desired data. Most data are stored by LOSC at 4096 Hz, however there may be event-related data releases with a 16384 Hz rate

host : str, optional

HTTP host name of LOSC server to access
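
Examples

A hedged sketch only: this call needs network access to the LOSC server, and the interferometer prefix and GPS interval below are purely illustrative:

>>> from gwpy.timeseries import TimeSeries
>>> strain = TimeSeries.fetch_open_data('L1', 968654552, 968654562)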

fill(value)

Fill the array with a scalar value.

Parameters:

value : scalar

All elements of a will be assigned this value.

Examples

>>> a = np.array([1, 2])
>>> a.fill(0)
>>> a
array([0, 0])
>>> a = np.empty(2)
>>> a.fill(1)
>>> a
array([ 1.,  1.])
find(channel, start, end, frametype=None, pad=None, dtype=None, nproc=1, verbose=False, **readargs)[source]

Find and read data from frames for a channel

Parameters:

channel : str, Channel

the name of the channel to read, or a Channel object.

start : LIGOTimeGPS, float, str

GPS start time of required data, any input parseable by to_gps is fine

end : LIGOTimeGPS, float, str

GPS end time of required data, any input parseable by to_gps is fine

frametype : str, optional

name of frametype in which this channel is stored, will search for containing frame types if necessary

pad : float, optional

value with which to fill gaps in the source data, only used if gap is not given, or gap='pad' is given

nproc : int, optional, default: 1

number of parallel processes to use, serial process by default.

dtype : numpy.dtype, str, type, or dict

numeric data type for returned data, e.g. numpy.float, or dict of (channel, dtype) pairs

verbose : bool, optional

print verbose output about NDS progress.

**readargs :

any other keyword arguments to be passed to read()
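
Examples

A hedged sketch only: the channel and frametype names below are placeholders, and the call requires direct access to frame files at an observatory computing centre:

>>> from gwpy.timeseries import StateTimeSeries
>>> flag = StateTimeSeries.find('L1:EXAMPLE-ODC_FLAG',      # placeholder channel
...                             1126259446, 1126259478,
...                             frametype='L1_R')           # placeholder frametype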

flatten(order='C')

Return a copy of the array collapsed into one dimension.

Parameters:

order : {‘C’, ‘F’, ‘A’, ‘K’}, optional

‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran- style) order. ‘A’ means to flatten in column-major order if a is Fortran contiguous in memory, row-major order otherwise. ‘K’ means to flatten a in the order the elements occur in memory. The default is ‘C’.

Returns:

y : ndarray

A copy of the input array, flattened to one dimension.

See also

ravel
Return a flattened array.
flat
A 1-D flat iterator over the array.

Examples

>>> a = np.array([[1,2], [3,4]])
>>> a.flatten()
array([1, 2, 3, 4])
>>> a.flatten('F')
array([1, 3, 2, 4])
from_hdf5(*args, **kwargs)[source]

Read an array from the given HDF file.

This method has been deprecated in favour of the unified I/O method:

Class.write(..., format=’hdf5’)

from_lal(*args, **kwargs)[source]

Generate a new TimeSeries from a LAL TimeSeries of any type.

from_nds2_buffer(*args, **kwargs)[source]

Construct a new TimeSeries from an nds2.buffer object

Parameters:

buffer_ : nds2.buffer

the input NDS2-client buffer to read

**metadata :

any other metadata keyword arguments to pass to the TimeSeries constructor

Returns:

timeseries : TimeSeries

a new TimeSeries containing the data from the nds2.buffer, and the appropriate metadata

Notes

This classmethod requires the nds2-client package
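
A sketch of the intended workflow, assuming the nds2-client package is installed and that the host, port, and channel name (all illustrative) point at a reachable NDS2 server:

>>> import nds2
>>> from gwpy.timeseries import TimeSeries
>>> # open a connection and fetch raw buffers with the nds2-client directly
>>> connection = nds2.connection('nds.ligo.caltech.edu', 31200)
>>> buffers = connection.fetch(1126259400, 1126259460, ['L1:GDS-CALIB_STRAIN'])
>>> # wrap the first buffer as a GWpy object, carrying over its metadata
>>> data = TimeSeries.from_nds2_buffer(buffers[0])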

from_pycbc(ts)[source]

Convert a pycbc.types.timeseries.TimeSeries into a TimeSeries

Parameters:

ts : pycbc.types.timeseries.TimeSeries

the input PyCBC TimeSeries array

Returns:

timeseries : TimeSeries

a GWpy version of the input timeseries
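
A minimal sketch, assuming PyCBC is installed (the array contents, sample spacing, and epoch are arbitrary):

>>> import numpy
>>> from pycbc.types import TimeSeries as PyCBCTimeSeries
>>> from gwpy.timeseries import TimeSeries
>>> # build a short PyCBC time series of zeros sampled at 4096 Hz
>>> pycbc_ts = PyCBCTimeSeries(numpy.zeros(4096), delta_t=1./4096, epoch=1126259400)
>>> gwpy_ts = TimeSeries.from_pycbc(pycbc_ts)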

get(channel, start, end, pad=None, dtype=None, verbose=False, **kwargs)[source]

Get data for this channel from frames or NDS

This method dynamically accesses either frames on disk, or a remote NDS2 server to find and return data for the given interval

Parameters:

channel : str, Channel

the name of the channel to read, or a Channel object.

start : LIGOTimeGPS, float, str

GPS start time of required data, any input parseable by to_gps is fine

end : LIGOTimeGPS, float, str

GPS end time of required data, any input parseable by to_gps is fine

pad : float, optional

value with which to fill gaps in the source data, only used if gap is not given, or gap='pad' is given

dtype : numpy.dtype, str, type, or dict

numeric data type for returned data, e.g. numpy.float, or dict of (channel, dtype) pairs

nproc : int, optional, default: 1

number of parallel processes to use, serial process by default.

verbose : bool, optional

print verbose output about NDS progress.

**kwargs other keyword arguments to pass to either :

find() (for direct GWF file access) or fetch() for remote NDS2 access

See also

TimeSeries.fetch
for grabbing data from a remote NDS2 server
TimeSeries.find
for discovering and reading data from local GWF files
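
For example, a sketch of letting GWpy choose the data source automatically (the channel name is illustrative; the call needs either local frame files or access to an NDS2 server):

>>> from gwpy.timeseries import TimeSeries
>>> # tries local GWF frames first, then falls back to NDS2
>>> data = TimeSeries.get('L1:ODC-MASTER_CHANNEL_OUT_DQ', 1126259400, 1126259460)
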
getfield(dtype, offset=0)

Returns a field of the given array as a certain type.

A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes.

Parameters:

dtype : str or dtype

The data type of the view. The dtype size of the view can not be larger than that of the array itself.

offset : int

Number of bytes to skip before beginning the element view.

Examples

>>> x = np.diag([1.+1.j]*2)
>>> x[1, 1] = 2 + 4.j
>>> x
array([[ 1.+1.j,  0.+0.j],
       [ 0.+0.j,  2.+4.j]])
>>> x.getfield(np.float64)
array([[ 1.,  0.],
       [ 0.,  2.]])

By choosing an offset of 8 bytes we can select the complex part of the array for our view:

>>> x.getfield(np.float64, offset=8)
array([[ 1.,  0.],
   [ 0.,  4.]])
insert(obj, values, axis=None)

Insert values along the given axis before the given indices and return a new Quantity object.

This is a thin wrapper around the numpy.insert function.

Parameters:

obj : int, slice or sequence of ints

Object that defines the index or indices before which values is inserted.

values : array-like

Values to insert. If the type of values is different from that of quantity, values is converted to the matching type. values should be shaped so that it can be broadcast appropriately. The unit of values must be consistent with this quantity.

axis : int, optional

Axis along which to insert values. If axis is None then the quantity array is flattened before insertion.

Returns:

out : Quantity

A copy of quantity with values inserted. Note that the insertion does not occur in-place: a new quantity array is returned.

Examples

>>> import astropy.units as u
>>> q = [1, 2] * u.m
>>> q.insert(0, 50 * u.cm)
<Quantity [ 0.5,  1.,  2.] m>
>>> q = [[1, 2], [3, 4]] * u.m
>>> q.insert(1, [10, 20] * u.m, axis=0)
<Quantity [[  1.,  2.],
           [ 10., 20.],
           [  3.,  4.]] m>
>>> q.insert(1, 10 * u.m, axis=1)
<Quantity [[  1., 10.,  2.],
           [  3., 10.,  4.]] m>
is_compatible(other)[source]

Check whether this series and other have compatible metadata

This method tests that the sample size, and the unit match.

is_contiguous(other, tol=3.814697265625e-06)[source]

Check whether other is contiguous with self.

Parameters:

other : Series, numpy.ndarray

another series of the same type to test for contiguity

tol : float, optional

the numerical tolerance of the test

Returns:

1 :

if other is contiguous with this series, i.e. would attach seamlessly onto the end

-1 :

if other is anti-contiguous with this series, i.e. would attach seamlessly onto the start

0 :

if other is completely dis-contiguous with this series

Notes

if a raw numpy.ndarray is passed as other, with no metadata, then the contiguity check will always pass
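
For example, a small sketch with arbitrary, dimensionless data:

>>> from gwpy.timeseries import TimeSeries
>>> a = TimeSeries([1, 2, 3, 4], sample_rate=1, epoch=0)
>>> b = TimeSeries([5, 6, 7, 8], sample_rate=1, epoch=4)
>>> a.is_contiguous(b)   # b starts exactly where a ends
1
>>> b.is_contiguous(a)   # a would attach seamlessly onto the start of b
-1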

item(*args)

Copy an element of an array to a standard Python scalar and return it.

Parameters:

*args : Arguments (variable number and type)

  • none: in this case, the method only works for arrays with one element (a.size == 1), which element is copied into a standard Python scalar object and returned.
  • int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return.
  • tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array.
Returns:

z : Standard Python scalar object

A copy of the specified element of the array as a suitable Python scalar

Notes

When the data type of a is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.

item is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math.

Examples

>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[3, 1, 7],
       [2, 8, 3],
       [8, 5, 3]])
>>> x.item(3)
2
>>> x.item(7)
5
>>> x.item((0, 1))
1
>>> x.item((2, 2))
3
itemset(*args)

Insert scalar into an array (scalar is cast to array’s dtype, if possible)

There must be at least one argument, and the last argument is the item to insert. a.itemset(*args) then assigns item to the array location selected by the preceding arguments, equivalent to (but faster than) the corresponding indexed assignment. The item should be a scalar value and the location arguments must select a single item in the array a.

Parameters:

*args : Arguments

If one argument: a scalar, only used in case a is of size 1. If two arguments: the last argument is the value to be set and must be a scalar, the first argument specifies a single array element location. It is either an int or a tuple.

Notes

Compared to indexing syntax, itemset provides some speed increase for placing a scalar into a particular location in an ndarray, if you must do this. However, generally this is discouraged: among other problems, it complicates the appearance of the code. Also, when using itemset (and item) inside a loop, be sure to assign the methods to a local variable to avoid the attribute look-up at each loop iteration.

Examples

>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[3, 1, 7],
       [2, 8, 3],
       [8, 5, 3]])
>>> x.itemset(4, 0)
>>> x.itemset((2, 2), 9)
>>> x
array([[3, 1, 7],
       [2, 0, 3],
       [8, 5, 9]])
max(axis=None, out=None)

Return the maximum along a given axis.

Refer to numpy.amax for full documentation.

See also

numpy.amax
equivalent function
mean(axis=None, dtype=None, out=None, keepdims=False)

Returns the average of the array elements along given axis.

Refer to numpy.mean for full documentation.

See also

numpy.mean
equivalent function
median(axis=None, **kwargs)[source]

Compute the median along the specified axis.

Returns the median of the array elements.

Parameters:

a : array_like

Input array or object that can be converted to an array.

axis : {int, sequence of int, None}, optional

Axis or axes along which the medians are computed. The default is to compute the median along a flattened version of the array. A sequence of axes is supported since version 1.9.0.

out : ndarray, optional

Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary.

overwrite_input : bool, optional

If True, then allow use of memory of input array a for calculations. The input array will be modified by the call to median. This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. If overwrite_input is True and a is not already an ndarray, an error will be raised.

keepdims : bool, optional

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original arr.

New in version 1.9.0.

Returns:

median : ndarray

A new array holding the result. If the input contains integers or floats smaller than float64, then the output data-type is np.float64. Otherwise, the data-type of the output is the same as that of the input. If out is specified, that array is returned instead.

See also

mean, percentile

Notes

Given a vector V of length N, the median of V is the middle value of a sorted copy of V, V_sorted, i.e. V_sorted[(N-1)/2], when N is odd, and the average of the two middle values of V_sorted when N is even.

Examples

>>> a = np.array([[10, 7, 4], [3, 2, 1]])
>>> a
array([[10,  7,  4],
       [ 3,  2,  1]])
>>> np.median(a)
3.5
>>> np.median(a, axis=0)
array([ 6.5,  4.5,  2.5])
>>> np.median(a, axis=1)
array([ 7.,  2.])
>>> m = np.median(a, axis=0)
>>> out = np.zeros_like(m)
>>> np.median(a, axis=0, out=m)
array([ 6.5,  4.5,  2.5])
>>> m
array([ 6.5,  4.5,  2.5])
>>> b = a.copy()
>>> np.median(b, axis=1, overwrite_input=True)
array([ 7.,  2.])
>>> assert not np.all(a==b)
>>> b = a.copy()
>>> np.median(b, axis=None, overwrite_input=True)
3.5
>>> assert not np.all(a==b)
min(axis=None, out=None, keepdims=False)

Return the minimum along a given axis.

Refer to numpy.amin for full documentation.

See also

numpy.amin
equivalent function
nansum(axis=None, out=None, keepdims=False)
newbyteorder(new_order='S')

Return the array with the same data viewed with a different byte order.

Equivalent to:

arr.view(arr.dtype.newbyteorder(new_order))

Changes are also made in all fields and sub-arrays of the array data type.

Parameters:

new_order : string, optional

Byte order to force; a value from the byte order specifications below. new_order codes can be any of:

  • ‘S’ - swap dtype from current to opposite endian
  • {‘<’, ‘L’} - little endian
  • {‘>’, ‘B’} - big endian
  • {‘=’, ‘N’} - native order
  • {‘|’, ‘I’} - ignore (no change to byte order)

The default value (‘S’) results in swapping the current byte order. The code does a case-insensitive check on the first letter of new_order for the alternatives above. For example, any of ‘B’ or ‘b’ or ‘biggish’ are valid to specify big-endian.

Returns:

new_arr : array

New array object with the dtype reflecting given change to the byte order.

nonzero()

Return the indices of the elements that are non-zero.

Refer to numpy.nonzero for full documentation.

See also

numpy.nonzero
equivalent function
override_unit(unit, parse_strict='raise')[source]

Forcefully reset the unit of these data

Use of this method is discouraged in favour of to(), which performs accurate conversions from one unit to another. The method should really only be used when the original unit of the array is plain wrong.

Parameters:

unit : Unit, str

the unit to force onto this array

Raises:

ValueError :

if a str cannot be parsed as a valid unit

pad(pad_width, **kwargs)[source]

Pad this series to a new size

Parameters:

pad_width : int, pair of ints

number of samples by which to pad each end of the array. Single int to pad both ends by the same amount, or (before, after) tuple to give uneven padding

**kwargs :

see numpy.pad() for kwarg documentation

Returns:

series : Series

the padded version of the input

See also

numpy.pad
for details on the underlying functionality
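
For example, a sketch padding two extra samples onto each end of a short series (the data values are arbitrary; the mode keyword is passed straight through to numpy.pad):

>>> from gwpy.timeseries import TimeSeries
>>> a = TimeSeries([1, 2, 3, 4], sample_rate=1, epoch=10)
>>> padded = a.pad(2, mode='constant')   # two zero-valued samples at each end
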
partition(kth, axis=-1, kind='introselect', order=None)

Rearranges the elements in the array in such a way that the value of the element in the kth position is in the position it would be in a sorted array. All elements smaller than the kth element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined.

New in version 1.8.0.

Parameters:

kth : int or sequence of ints

Element index to partition by. The kth element value will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The ordering of all elements in the partitions is undefined. If provided with a sequence of kth values, all elements indexed by those values will be partitioned into their sorted positions at once.

axis : int, optional

Axis along which to sort. Default is -1, which means sort along the last axis.

kind : {‘introselect’}, optional

Selection algorithm. Default is ‘introselect’.

order : str or list of str, optional

When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

See also

numpy.partition
Return a partitioned copy of an array.
argpartition
Indirect partition.
sort
Full sort.

Notes

See np.partition for notes on the different algorithms.

Examples

>>> a = np.array([3, 4, 2, 1])
>>> a.partition(3)
>>> a
array([2, 1, 3, 4])
>>> a.partition((1, 3))
>>> a
array([1, 2, 3, 4])
plot(**kwargs)[source]

Plot the data for this TimeSeries

prepend(other, gap='raise', inplace=True, pad=0.0, resize=True)[source]

Connect another series onto the start of the current one.

Parameters:

other : Series

another series of the same type as this one

gap : str, optional, default: 'raise'

action to perform if there’s a gap between the other series and this one. One of

  • 'raise' - raise an Exception
  • 'ignore' - remove gap and join data
  • 'pad' - pad gap with zeros

inplace : bool, optional, default: True

perform operation in-place, modifying current series, otherwise copy data and return new series

Warning

inplace prepend bypasses the reference check in numpy.ndarray.resize, so be careful to only use this for arrays that aren't sharing their memory with another object!

pad : float, optional, default: 0.0

value with which to pad discontiguous Series

resize : bool, optional, default: True

Returns:

series : TimeSeries

time-series containing joined data sets
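
For example, a sketch joining two short, contiguous series (the data values are arbitrary):

>>> from gwpy.timeseries import TimeSeries
>>> older = TimeSeries([1, 2, 3, 4], sample_rate=1, epoch=0)
>>> newer = TimeSeries([5, 6, 7, 8], sample_rate=1, epoch=4)
>>> combined = newer.prepend(older)   # combined now spans GPS [0, 8)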

prod(axis=None, dtype=None, out=None, keepdims=False)

Return the product of the array elements over the given axis

Refer to numpy.prod for full documentation.

See also

numpy.prod
equivalent function
ptp(axis=None, out=None)

Peak to peak (maximum - minimum) value along a given axis.

Refer to numpy.ptp for full documentation.

See also

numpy.ptp
equivalent function
put(indices, values, mode='raise')

Set a.flat[n] = values[n] for all n in indices.

Refer to numpy.put for full documentation.

See also

numpy.put
equivalent function
ravel([order])

Return a flattened array.

Refer to numpy.ravel for full documentation.

See also

numpy.ravel
equivalent function
ndarray.flat
a flat iterator on the array.
read(*args, **kwargs)

Read in data

The arguments passed to this method depend on the format

The available built-in formats are:

Format Read Write Auto-identify
hdf Yes No No
hdf5 Yes Yes Yes
repeat(repeats, axis=None)

Repeat elements of an array.

Refer to numpy.repeat for full documentation.

See also

numpy.repeat
equivalent function
reshape(shape, order='C')

Returns an array containing the same data with a new shape.

Refer to numpy.reshape for full documentation.

See also

numpy.reshape
equivalent function
resize(new_shape, refcheck=True)

Change shape and size of array in-place.

Parameters:

new_shape : tuple of ints, or n ints

Shape of resized array.

refcheck : bool, optional

If False, reference count will not be checked. Default is True.

Returns:

None :

Raises:

ValueError :

If a does not own its own data or references or views to it exist, and the data memory must be changed.

SystemError :

If the order keyword argument is specified. This behaviour is a bug in NumPy.

See also

resize
Return a new array with the specified shape.

Notes

This reallocates space for the data area if necessary.

Only contiguous arrays (data elements consecutive in memory) can be resized.

The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set refcheck to False.

Examples

Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped:

>>> a = np.array([[0, 1], [2, 3]], order='C')
>>> a.resize((2, 1))
>>> a
array([[0],
       [1]])
>>> a = np.array([[0, 1], [2, 3]], order='F')
>>> a.resize((2, 1))
>>> a
array([[0],
       [2]])

Enlarging an array: as above, but missing entries are filled with zeros:

>>> b = np.array([[0, 1], [2, 3]])
>>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
>>> b
array([[0, 1, 2],
       [3, 0, 0]])

Referencing an array prevents resizing...

>>> c = a
>>> a.resize((1, 1))
Traceback (most recent call last):
...
ValueError: cannot resize an array that has been referenced ...

Unless refcheck is False:

>>> a.resize((1, 1), refcheck=False)
>>> a
array([[0]])
>>> c
array([[0]])
round(decimals=0, out=None)

Return a with each element rounded to the given number of decimals.

Refer to numpy.around for full documentation.

See also

numpy.around
equivalent function
searchsorted(v, side='left', sorter=None)

Find indices where elements of v should be inserted in a to maintain order.

For full documentation, see numpy.searchsorted

See also

numpy.searchsorted
equivalent function
setfield(val, dtype, offset=0)

Put a value into a specified place in a field defined by a data-type.

Place val into a's field defined by dtype and beginning offset bytes into the field.

Parameters:

val : object

Value to be placed in field.

dtype : dtype object

Data-type of the field in which to place val.

offset : int, optional

The number of bytes into the field at which to place val.

Returns:

None :

See also

getfield

Examples

>>> x = np.eye(3)
>>> x.getfield(np.float64)
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
>>> x.setfield(3, np.int32)
>>> x.getfield(np.int32)
array([[3, 3, 3],
       [3, 3, 3],
       [3, 3, 3]])
>>> x
array([[  1.00000000e+000,   1.48219694e-323,   1.48219694e-323],
       [  1.48219694e-323,   1.00000000e+000,   1.48219694e-323],
       [  1.48219694e-323,   1.48219694e-323,   1.00000000e+000]])
>>> x.setfield(np.eye(3), np.int32)
>>> x
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
setflags(write=None, align=None, uic=None)

Set array flags WRITEABLE, ALIGNED, and UPDATEIFCOPY, respectively.

These Boolean-valued flags affect how numpy interprets the memory area used by a (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The UPDATEIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.)

Parameters:

write : bool, optional

Describes whether or not a can be written to.

align : bool, optional

Describes whether or not a is aligned properly for its type.

uic : bool, optional

Describes whether or not a is a copy of another “base” array.

Notes

Array flags provide information about how the memory area used for the array is to be interpreted. There are 6 Boolean flags in use, only three of which can be changed by the user: UPDATEIFCOPY, WRITEABLE, and ALIGNED.

WRITEABLE (W) the data area can be written to;

ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler);

UPDATEIFCOPY (U) this array is a copy of some other array (referenced by .base). When this array is deallocated, the base array will be updated with the contents of this array.

All flags can be accessed using their first (upper case) letter as well as the full name.

Examples

>>> y
array([[3, 1, 7],
       [2, 0, 0],
       [8, 5, 9]])
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
>>> y.setflags(write=0, align=0)
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : False
  ALIGNED : False
  UPDATEIFCOPY : False
>>> y.setflags(uic=1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: cannot set UPDATEIFCOPY flag to True
sort(axis=-1, kind='quicksort', order=None)

Sort an array, in-place.

Parameters:

axis : int, optional

Axis along which to sort. Default is -1, which means sort along the last axis.

kind : {‘quicksort’, ‘mergesort’, ‘heapsort’}, optional

Sorting algorithm. Default is ‘quicksort’.

order : str or list of str, optional

When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

See also

numpy.sort
Return a sorted copy of an array.
argsort
Indirect sort.
lexsort
Indirect stable sort on multiple keys.
searchsorted
Find elements in sorted array.
partition
Partial sort.

Notes

See sort for notes on the different sorting algorithms.

Examples

>>> a = np.array([[1,4], [3,1]])
>>> a.sort(axis=1)
>>> a
array([[1, 4],
       [1, 3]])
>>> a.sort(axis=0)
>>> a
array([[1, 3],
       [1, 4]])

Use the order keyword to specify a field to use when sorting a structured array:

>>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
>>> a.sort(order='y')
>>> a
array([('c', 1), ('a', 2)],
      dtype=[('x', '|S1'), ('y', '<i4')])
squeeze(axis=None)

Remove single-dimensional entries from the shape of a.

Refer to numpy.squeeze for full documentation.

See also

numpy.squeeze
equivalent function
std(axis=None, dtype=None, out=None, ddof=0, keepdims=False)

Returns the standard deviation of the array elements along given axis.

Refer to numpy.std for full documentation.

See also

numpy.std
equivalent function
sum(axis=None, dtype=None, out=None, keepdims=False)

Return the sum of the array elements over the given axis.

Refer to numpy.sum for full documentation.

See also

numpy.sum
equivalent function
swapaxes(axis1, axis2)

Return a view of the array with axis1 and axis2 interchanged.

Refer to numpy.swapaxes for full documentation.

See also

numpy.swapaxes
equivalent function
take(indices, axis=None, out=None, mode='raise')

Return an array formed from the elements of a at the given indices.

Refer to numpy.take for full documentation.

See also

numpy.take
equivalent function
to(unit, equivalencies=[])

Returns a new Quantity object with the specified units.

Parameters:

unit : UnitBase instance, str

An object that represents the unit to convert to. Must be an UnitBase object or a string parseable by the units package.

equivalencies : list of equivalence pairs, optional

A list of equivalence pairs to try if the units are not directly convertible. See Equivalencies. If not provided or [], class default equivalencies will be used (none for Quantity, but may be set for subclasses). If None, no equivalencies will be applied at all, not even any set globally or within a context.

to_dqflag(name=None, minlen=1, dtype=<type 'float'>, round=False, label=None, description=None)[source]

Convert this StateTimeSeries into a DataQualityFlag

Each contiguous set of True values is grouped as a Segment running from the GPS time of the first True value to the GPS time of the next False value (or to the end of the series).

Parameters:

minlen : int, optional, default: 1

minimum number of consecutive True values to identify as a Segment. This is useful to ignore single bit flips, for example.

dtype : type, callable, default: float

output segment entry type, can pass either a type for simple casting, or a callable function that accepts a float and returns another numeric type

round : bool, optional, default: False

choose to round each Segment to its inclusive integer boundaries

Returns:

dqflag : DataQualityFlag

a segment representation of this StateTimeSeries: the span defines the known segments, while the contiguous sets of True values define the active segments
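
For example, a sketch showing how minlen can be used to ignore an isolated single-sample True value (the boolean data and epoch are arbitrary):

>>> from gwpy.timeseries import StateTimeSeries
>>> state = StateTimeSeries([False, True, False, True, True, True, False],
...                         sample_rate=1, epoch=1000000000)
>>> flag = state.to_dqflag(minlen=2, round=True)   # keeps only the 3-sample True block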

to_hdf5(*args, **kwargs)[source]

Convert this array to a h5py.Dataset.

This method has been deprecated in favour of the unified I/O method:

Class.write(..., format=’hdf5’)

to_lal(*args, **kwargs)[source]

Bogus function inherited from superclass, do not use.

to_pycbc(*args, **kwargs)[source]

Convert this TimeSeries into a PyCBC TimeSeries

Parameters:

copy : bool, optional, default: True

if True, copy these data to a new array

Returns:

timeseries : TimeSeries

a PyCBC representation of this TimeSeries
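
A minimal sketch, assuming PyCBC is installed (the data values and epoch are arbitrary):

>>> from gwpy.timeseries import TimeSeries
>>> gwdata = TimeSeries([1., 2., 3., 4.], sample_rate=4096, epoch=1126259400)
>>> pycbc_ts = gwdata.to_pycbc()   # copies the data into a pycbc.types.TimeSeries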

tobytes(order='C')

Construct Python bytes containing the raw data bytes in the array.

Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object can be produced in either ‘C’ or ‘Fortran’, or ‘Any’ order (the default is ‘C’-order). ‘Any’ order means C-order unless the F_CONTIGUOUS flag in the array is set, in which case it means ‘Fortran’ order.

New in version 1.9.0.

Parameters:

order : {‘C’, ‘F’, None}, optional

Order of the data for multidimensional arrays: C, Fortran, or the same as for the original array.

Returns:

s : bytes

Python bytes exhibiting a copy of a's raw data.

Examples

>>> x = np.array([[0, 1], [2, 3]])
>>> x.tobytes()
b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00'
>>> x.tobytes('C') == x.tobytes()
True
>>> x.tobytes('F')
b'\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00'
tofile(fid, sep="", format="%s")

Write array to a file as text or binary (default).

Data is always written in ‘C’ order, independent of the order of a. The data produced by this method can be recovered using the function fromfile().

Parameters:

fid : file or str

An open file object, or a string containing a filename.

sep : str

Separator between array items for text output. If “” (empty), a binary file is written, equivalent to file.write(a.tobytes()).

format : str

Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item.

Notes

This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size.

tolist()

Return the array as a (possibly nested) list.

Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible Python type.

Parameters:

none :

Returns:

y : list

The possibly nested list of array elements.

Notes

The array may be recreated, a = np.array(a.tolist()).

Examples

>>> a = np.array([1, 2])
>>> a.tolist()
[1, 2]
>>> a = np.array([[1, 2], [3, 4]])
>>> list(a)
[array([1, 2]), array([3, 4])]
>>> a.tolist()
[[1, 2], [3, 4]]
tostring(order='C')[source]

Construct Python bytes containing the raw data bytes in the array.

Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object can be produced in either ‘C’ or ‘Fortran’, or ‘Any’ order (the default is ‘C’-order). ‘Any’ order means C-order unless the F_CONTIGUOUS flag in the array is set, in which case it means ‘Fortran’ order.

This function is a compatibility alias for tobytes. Despite its name it returns bytes not strings.

Parameters:

order : {‘C’, ‘F’, None}, optional

Order of the data for multidimensional arrays: C, Fortran, or the same as for the original array.

Returns:

s : bytes

Python bytes exhibiting a copy of a's raw data.

Examples

>>> x = np.array([[0, 1], [2, 3]])
>>> x.tobytes()
b'\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00'
>>> x.tobytes('C') == x.tobytes()
True
>>> x.tobytes('F')
b'\x00\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00'
trace(offset=0, axis1=0, axis2=1, dtype=None, out=None)

Return the sum along diagonals of the array.

Refer to numpy.trace for full documentation.

See also

numpy.trace
equivalent function
transpose(*axes)

Returns a view of the array with axes transposed.

For a 1-D array, this has no effect. (To change between column and row vectors, first cast the 1-D array into a matrix object.) For a 2-D array, this is the usual matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and a.shape = (i[0], i[1], ... i[n-2], i[n-1]), then a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0]).

Parameters:

axes : None, tuple of ints, or n ints

  • None or no argument: reverses the order of the axes.
  • tuple of ints: i in the j-th place in the tuple means a's i-th axis becomes a.transpose()'s j-th axis.
  • n ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form)
Returns:

out : ndarray

View of a, with axes suitably permuted.

See also

ndarray.T
Array property returning the array transposed.

Examples

>>> a = np.array([[1, 2], [3, 4]])
>>> a
array([[1, 2],
       [3, 4]])
>>> a.transpose()
array([[1, 3],
       [2, 4]])
>>> a.transpose((1, 0))
array([[1, 3],
       [2, 4]])
>>> a.transpose(1, 0)
array([[1, 3],
       [2, 4]])
update(other, inplace=True)[source]

Update this series by appending new data from another series and dropping the same amount of data off the start.

This is a convenience method that just calls append with resize=False.

value_at(x)[source]

Return the value of this Series at the given xindex value

Parameters:

x : float, Quantity

the xindex value at which to search

Returns:

y : Quantity

the value of this Series at the given xindex value
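
For example, a sketch with arbitrary, dimensionless data:

>>> from gwpy.timeseries import TimeSeries
>>> data = TimeSeries([10, 20, 30, 40], sample_rate=1, epoch=0)
>>> y = data.value_at(2)   # the sample at GPS time 2, returned as a Quantity (here 30)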

var(axis=None, dtype=None, out=None, ddof=0, keepdims=False)

Returns the variance of the array elements, along given axis.

Refer to numpy.var for full documentation.

See also

numpy.var
equivalent function
view(dtype=None, type=None)

New view of array with the same data.

Parameters:

dtype : data-type or ndarray sub-class, optional

Data-type descriptor of the returned view, e.g., float32 or int16. The default, None, results in the view having the same data-type as a. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the type parameter).

type : Python type, optional

Type of the returned view, e.g., ndarray or matrix. Again, the default None results in type preservation.

Notes

a.view() is used two different ways:

a.view(some_dtype) or a.view(dtype=some_dtype) constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory.

a.view(ndarray_subclass) or a.view(type=ndarray_subclass) just returns an instance of ndarray_subclass that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory.

For a.view(some_dtype), if some_dtype has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of a (shown by print(a)). It also depends on exactly how a is stored in memory. Therefore if a is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results.

Examples

>>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])

Viewing array data using a different type and dtype:

>>> y = x.view(dtype=np.int16, type=np.matrix)
>>> y
matrix([[513]], dtype=int16)
>>> print(type(y))
<class 'numpy.matrixlib.defmatrix.matrix'>

Creating a view on a structured array so it can be used in calculations

>>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
>>> xv = x.view(dtype=np.int8).reshape(-1,2)
>>> xv
array([[1, 2],
       [3, 4]], dtype=int8)
>>> xv.mean(0)
array([ 2.,  3.])

Making changes to the view changes the underlying array

>>> xv[0,1] = 20
>>> print(x)
[(1, 20) (3, 4)]

Using a view to convert an array to a recarray:

>>> z = x.view(np.recarray)
>>> z.a
array([1], dtype=int8)

Views share data:

>>> x[0] = (9, 10)
>>> z[0]
(9, 10)

Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.:

>>> x = np.array([[1,2,3],[4,5,6]], dtype=np.int16)
>>> y = x[:, 0:2]
>>> y
array([[1, 2],
       [4, 5]], dtype=int16)
>>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: new type not compatible with array.
>>> z = y.copy()
>>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
array([[(1, 2)],
       [(4, 5)]], dtype=[('width', '<i2'), ('length', '<i2')])
write(data, *args, **kwargs)

Write out data

The arguments passed to this method depend on the format

The available built-in formats are:

Format Read Write Auto-identify
hdf Yes Yes No
hdf5 Yes Yes Yes
zip()[source]

Zip the xindex and value arrays of this Series

Returns:

stacked : 2-d numpy.ndarray

The array formed by stacking the xindex and value arrays of this series

Examples

>>> a = Series([0, 2, 4, 6, 8], xindex=[-5, -4, -3, -2, -1])
>>> a.zip()
array([[-5.,  0.],
       [-4.,  2.],
       [-3.,  4.],
       [-2.,  6.],
       [-1.,  8.]])
class gwpy.timeseries.StateVectorDict(*args, **kwds)[source]

Bases: gwpy.timeseries.core.TimeSeriesBaseDict

Methods Summary

append(other[, copy])
clear() Remove all items from od.
copy()
crop([start, end, copy]) Crop each entry of this TimeSeriesBaseDict.
fetch(*args, **kwargs) Fetch data from NDS for a number of channels.
find(channels, start, end[, frametype, pad, ...]) Find and read data from frames for a number of channels.
fromkeys(S[, v]) New ordered dictionary with keys from S; values default to None.
get(channels, start, end[, pad, dtype, ...]) Retrieve data for multiple channels from frames or NDS.
has_key(k) True if D has a key k, else False.
items() List of (key, value) pairs in od.
iteritems() An iterator over the (key, value) pairs in od.
iterkeys() An iterator over the keys in od.
itervalues() An iterator over the values in od.
keys() List of keys in od.
plot([label]) Plot the data for this TimeSeriesBaseDict.
pop(k[, d]) Remove the specified key and return the corresponding value; if the key is not found, d is returned if given, otherwise KeyError is raised.
popitem() Return and remove a (key, value) pair; pairs are returned in LIFO order if last is true, FIFO order if false.
prepend(other, **kwargs)
read(*args, **kwargs) Read data for multiple bit vector channels into a StateVectorDict.
resample(rate, **kwargs) Resample items in this dict.
setdefault(k[, d]) Return od.get(k, d), also setting od[k] = d if k is not in od.
update([E, ]**F) Update D from mapping/iterable E and F.
values() List of values in od.
viewitems() A set-like object providing a view on od's items.
viewkeys() A set-like object providing a view on od's keys.
viewvalues() An object providing a view on od's values.
write(data, *args, **kwargs) Write this TimeSeriesDict to a file

Methods Documentation

append(other, copy=True, **kwargs)[source]
clear() → None. Remove all items from od.
copy()[source]
crop(start=None, end=None, copy=False)[source]

Crop each entry of this TimeSeriesBaseDict.

This method calls the crop() method of all entries and modifies this dict in place.

Parameters:

start : Time, float

GPS start time to crop TimeSeries at left

end : Time, float

GPS end time to crop TimeSeries at right

See also

TimeSeries.crop
for more details
fetch(*args, **kwargs)[source]

Fetch data from NDS for a number of channels.

Parameters:

channels : list

required data channels.

start : Time, or float

GPS start time of data span.

end : Time, or float

GPS end time of data span.

host : str, optional

URL of NDS server to use, defaults to observatory site host.

port : int, optional

port number for NDS server query, must be given with host.

verify : bool, optional, default: True

check channels exist in database before asking for data

verbose : bool, optional

print verbose output about NDS progress.

connection : NDS2Connection

open NDS connection to use.

type : int, str,

NDS2 channel type integer or string name.

dtype : numpy.dtype, str, type, or dict

numeric data type for returned data, e.g. numpy.float, or dict of (channel, dtype) pairs

Returns:

data : TimeSeriesBaseDict

a new TimeSeriesBaseDict of (str, TimeSeries) pairs fetched from NDS.
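
For example, a sketch fetching the same bit-vector channel from two detectors (the channel names and GPS interval are illustrative; the call requires access to an NDS2 server):

>>> from gwpy.timeseries import StateVectorDict
>>> svd = StateVectorDict.fetch(['H1:GDS-CALIB_STATE_VECTOR',
...                              'L1:GDS-CALIB_STATE_VECTOR'],
...                             1126259400, 1126259460)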

find(channels, start, end, frametype=None, pad=None, dtype=None, nproc=1, verbose=False, allow_tape=True, observatory=None, **readargs)[source]

Find and read data from frames for a number of channels.

Parameters:

channels : list

required data channels.

start : Time, or float

GPS start time of data span.

end : Time, or float

GPS end time of data span.

frametype : str, optional

name of frametype in which this channel is stored, by default will search for all required frame types

pad : float, optional

value with which to fill gaps in the source data, only used if gap is not given, or gap='pad' is given

dtype : numpy.dtype, str, type, or dict

numeric data type for returned data, e.g. numpy.float, or dict of (channel, dtype) pairs

nproc : int, optional, default: 1

number of parallel processes to use, serial process by default.

verbose : bool, optional

print verbose output about NDS progress.

allow_tape : bool, optional, default: True

allow reading from frames on tape

**readargs :

any other keyword arguments to be passed to read()

fromkeys(S[, v]) → New ordered dictionary with keys from S.

If not specified, the value defaults to None.

get(channels, start, end, pad=None, dtype=None, verbose=False, allow_tape=False, **kwargs)[source]

Retrieve data for multiple channels from frames or NDS

This method dynamically accesses either frames on disk, or a remote NDS2 server to find and return data for the given interval

Parameters:

channels : list

required data channels.

start : Time, or float

GPS start time of data span.

end : Time, or float

GPS end time of data span.

frametype : str, optional

name of frametype in which this channel is stored, by default will search for all required frame types

pad : float, optional

value with which to fill gaps in the source data, only used if gap is not given, or gap='pad' is given

dtype : numpy.dtype, str, type, or dict

numeric data type for returned data, e.g. numpy.float, or dict of (channel, dtype) pairs

nproc : int, optional, default: 1

number of parallel processes to use, serial process by default.

verbose : bool, optional

print verbose output about NDS progress.

allow_tape : bool, optional, default: False

allow the use of frames that are held on tape; the default is False, so that the TimeSeries.fetch method can try to select a server that doesn't store data on tape (this doesn't always work)

**kwargs :

other keyword arguments to pass to either TimeSeriesBaseDict.find (for direct GWF file access) or TimeSeriesBaseDict.fetch for remote NDS2 access

has_key(k) → True if D has a key k, else False
items() → list of (key, value) pairs in od
iteritems()

od.iteritems -> an iterator over the (key, value) pairs in od

iterkeys() → an iterator over the keys in od
itervalues()

od.itervalues -> an iterator over the values in od

keys() → list of keys in od
plot(label='key', **kwargs)[source]

Plot the data for this TimeSeriesBaseDict.

Parameters:

label : str, optional

labelling system to use, or fixed label for all elements. Special values include

  • 'key': use the key of the TimeSeriesBaseDict,
  • 'name': use the name of each element

If anything else, that fixed label will be used for all lines.

**kwargs :

all other keyword arguments are passed to the plotter as appropriate

pop(k[, d]) → v, remove specified key and return the corresponding value.

If the key is not found, d is returned if given, otherwise KeyError is raised.

popitem() → (k, v), return and remove a (key, value) pair.

Pairs are returned in LIFO order if last is true or FIFO order if false.

prepend(other, **kwargs)[source]
classmethod read(*args, **kwargs)

Read data for multiple bit vector channels into a StateVectorDict

Parameters:

source : str, Cache

a single file path str, or a Cache containing a contiguous list of files.

channels : ChannelList, list

a list of channels to read from the source.

start : LIGOTimeGPS, float, str optional

GPS start time of required data, anything parseable by to_gps() is fine

end : LIGOTimeGPS, float, str, optional

GPS end time of required data, anything parseable by to_gps() is fine

bits : list of lists, dict, optional

the ordered list of interesting bit lists for each channel, or a dict of (channel, list) pairs

format : str, optional

source format identifier. If not given, the format will be detected if possible. See below for list of acceptable formats.

nproc : int, optional, default: 1

number of parallel processes to use, serial process by default.

Note

Parallel frame reading, via the nproc keyword argument, is only available when giving a Cache of frames, or using the format='cache' keyword argument.

gap : str, optional

how to handle gaps in the cache, one of

  • ‘ignore’: do nothing, let the underlying reader method handle it
  • ‘warn’: do nothing except print a warning to the screen
  • ‘raise’: raise an exception upon finding a gap (default)
  • ‘pad’: insert a value to fill the gaps

pad : float, optional

value with which to fill gaps in the source data, only used if gap is not given, or gap='pad' is given

Returns:

statevectordict : StateVectorDict

a StateVectorDict of (channel, StateVector) pairs. The keys are guaranteed to be the ordered list channels as given.

Notes

The available built-in formats are:

Format Read Write Auto-identify
framecpp Yes No No
gwf Yes No Yes
lalframe Yes No No
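
For example, a sketch reading two bit-vector channels from a single GWF file (the file path and channel names are hypothetical; the format is identified automatically from the .gwf extension):

>>> from gwpy.timeseries import StateVectorDict
>>> svd = StateVectorDict.read('HLV-STATE-1126259400-64.gwf',
...                            ['H1:GDS-CALIB_STATE_VECTOR',
...                             'L1:GDS-CALIB_STATE_VECTOR'])
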
resample(rate, **kwargs)[source]

Resample items in this dict.

This operation overwrites items in place.

Parameters:

rate : dict, float

either a dict of (channel, float) pairs for key-wise resampling, or a single float/int to resample all items.

kwargs :

other keyword arguments to pass to each item’s resampling method.

setdefault(k[, d]) → od.get(k,d), also set od[k]=d if k not in od
update([E, ]**F) → None. Update D from mapping/iterable E and F.

If E is present and has a .keys() method, does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, does: for (k, v) in E: D[k] = v
In either case, this is followed by: for k, v in F.items(): D[k] = v

values() → list of values in od
viewitems() → a set-like object providing a view on od's items
viewkeys() → a set-like object providing a view on od's keys
viewvalues() → an object providing a view on od's values
write(data, *args, **kwargs)

Write this TimeSeriesDict to a file

Notes

The available built-in formats are:

Format Read Write Auto-identify
framecpp Yes Yes No