
Update internals to vectorised SciPy #94

Merged, Jul 20, 2017 (140 commits)

(Diff below shows changes from 20 commits.)

Commits
All commits by calum-chamberlain.

577f114  Start work on changing internals to not cv (Mar 22, 2017)
3b4cae0  Start working on internals (Apr 6, 2017)
abfd0a3  Add links to scipy functions (May 20, 2017)
2ccc6fe  Tests on synth data work for scipy internals (May 22, 2017)
c8af6fc  Merge branch 'develop' into Xarray-internals (May 22, 2017)
97ee390  Merge branch 'Xarray-internals' of https://github.com/EQcorrscan/EQco… (May 22, 2017)
4817ad7  Update CI requirements (May 22, 2017)
bdc9a56  enforce float64 in internals and float32 externally (May 23, 2017)
bb6ba66  Hack to cope with padding template channels, should be adapted (May 25, 2017)
ec7b524  Fix some fails, expect more (May 25, 2017)
1e61f32  Adjust download timestamps for NCEDC data (May 29, 2017)
211902e  Bump obspy version number for py 3 (May 29, 2017)
796df33  Fix channel assignment (May 29, 2017)
928c5f9  merge conflict (May 30, 2017)
3f9fd2a  Merge branch 'develop' into Xarray-internals (May 30, 2017)
39bbe74  Merge branch 'develop' into Xarray-internals (Jun 2, 2017)
e572c1d  Remove un-used dependancies (Jun 3, 2017)
9d7eb5d  Try install with pip without dependencies (Jun 3, 2017)
4b221e0  Update appveyor.yml (Jun 3, 2017)
77c27f3  Update appveyor.yml (Jun 3, 2017)
52b0402  Bump numpy dependancy and change Python on appveyor (Jun 4, 2017)
5ef60ba  Remove unused dependancies (Jun 4, 2017)
8e2b13d  Bump obspy version in appveyor (Jun 4, 2017)
4fe20b9  Update numpy later in appveyor (Jun 4, 2017)
72c4e51  osbpy 1.0.3 breaks requirements, grrr (Jun 4, 2017)
d292f60  Try obspy install with setup.py (Jun 4, 2017)
ee723fc  More ducking about with the obspy install (Jun 4, 2017)
056636b  More ducking about with the obspy install (Jun 4, 2017)
56dab9b  More ducking about with the obspy install (Jun 4, 2017)
e150bb1  Do a full checkout of obspy (Jun 4, 2017)
ac72711  Remove obspy directory post install (Jun 4, 2017)
e7f84b1  Remove obspy directory post install (Jun 4, 2017)
a13841d  job lib is no longer a dependancy (Jun 4, 2017)
5e95962  Start removing old functions (Jun 5, 2017)
b104425  Remove old functions and test (Jun 5, 2017)
1cf0205  Remove cv2 import (Jun 5, 2017)
1f92146  Correlation test more assertive (Jun 5, 2017)
2e549fc  Typo in _spike_test (Jun 5, 2017)
95a5710  Merge branch 'develop' into Xarray-internals (Jun 15, 2017)
255d40f  Merge branch 'develop' into Xarray-internals (Jun 21, 2017)
841310b  Temp changes: (Jul 2, 2017)
ac7e533  Add test for large test-case (Jul 2, 2017)
61321f7  Merge conflict (Jul 3, 2017)
84d3b53  Compiled normalisation for memory efficiency (Jul 3, 2017)
764f56b  Start debug travis and appveyor builds (Jul 3, 2017)
fef99e8  Update accuracy of normalisation and refine tests (Jul 4, 2017)
f3fd575  cope with zero std for first sample (Jul 4, 2017)
29e72ae  Write loop in C for C normalisation (Jul 4, 2017)
2c9d2a4  Add possible faster routine (Jul 4, 2017)
b5a8310  Remove nans in normalise (Jul 4, 2017)
599c16e  Finalise speed-up ormalisation routine (Jul 4, 2017)
4865c28  Update for building on Windows (Jul 5, 2017)
89282ed  Merge branch 'Xarray-internals' of https://github.com/eqcorrscan/EQco… (Jul 5, 2017)
916627e  Force pytest to use 2 cores (Jul 5, 2017)
79d0b0b  Feed str to libnames (Jul 5, 2017)
576074f  Add def file for windows DLL exposure (Jul 5, 2017)
9aea6d1  Merge branch 'Xarray-internals' of https://github.com/eqcorrscan/EQco… (Jul 5, 2017)
0225f0d  Added working fftw cross-correlation C routines (Jul 7, 2017)
5b9deeb  Force fftw install (Jul 7, 2017)
7539916  Use system libraries (Jul 7, 2017)
6991f94  Debug build not using library args: (Jul 7, 2017)
93f5389  Try changing to cpp to get around MSVC lack of complex.h handling (Jul 10, 2017)
f793e19  Change from using complex.h (Jul 10, 2017)
957ae8f  Try forcing library dir to be found (Jul 10, 2017)
c7dd9a9  Update travis setup.py call (Jul 10, 2017)
159d72f  undo (Jul 10, 2017)
3a4be06  change linking for unix systems (Jul 11, 2017)
62aa96e  Merge branch 'Xarray-internals' of https://github.com/eqcorrscan/EQco… (Jul 11, 2017)
8ac40db  Windows install (Jul 11, 2017)
4becb2b  sort cland issues for OSX (Jul 11, 2017)
639b7ac  change to fftw (Jul 11, 2017)
7f65f2b  debug appveyor (Jul 11, 2017)
73222f4  debuging (Jul 11, 2017)
7265b47  debug travis (Jul 11, 2017)
0288af8  Try system installs of fftw (Jul 11, 2017)
5cfda19  Try system installs of fftw (Jul 11, 2017)
b266d6a  Clean up travios and appveyor and setup (Jul 11, 2017)
6c02c36  link gomp and add windows path (Jul 11, 2017)
3c86b26  choco install curl and 7z (Jul 11, 2017)
81c64ab  Don't test on py 3.3, obspy doesn't work (Jul 11, 2017)
ab1a73b  Run normalisation in-line (Jul 12, 2017)
269412a  cleanup (Jul 12, 2017)
45c9e51  floating point math issues (Jul 12, 2017)
7af48e3  yml fixes (Jul 12, 2017)
509d332  7zip install (Jul 12, 2017)
e85fa65  Update windows libraries (Jul 12, 2017)
f0b16fd  get windows going locally (Jul 12, 2017)
fb72057  Merge branch 'develop' into Xarray-internals (Jul 12, 2017)
4a0c615  ci bugs (Jul 12, 2017)
2c8480b  appveyor changes (Jul 12, 2017)
af4377d  appveyor changes (Jul 12, 2017)
c2ae7e3  Update ridiculous test (Jul 12, 2017)
067b514  try a different install tactic (Jul 12, 2017)
92c31f6  Merge branch 'develop' into Xarray-internals (Jul 12, 2017)
a2dbf5d  Update for windows debug (Jul 12, 2017)
819687d  Merge branch 'Xarray-internals' of https://github.com/EQcorrscan/EQco… (Jul 12, 2017)
bd32dea  pytest changes and print path on windows (Jul 12, 2017)
768e760  Remove issue-rich pytest arg config (Jul 12, 2017)
d8f7077  Can't get auto working on travis (Jul 12, 2017)
676b215  Debug appveyor issues (Jul 12, 2017)
04cbd55  Update libnames.py (Jul 12, 2017)
6a822d5  enforce static linking on windows (Jul 12, 2017)
1f67aec  Sort rounding errors by internally using double-precision (Jul 14, 2017)
bb5d179  build_ext in place (Jul 14, 2017)
be2b2e8  appveyor fixes from ci-testing (Jul 17, 2017)
f9ec900  Add fftw copyfile (Jul 17, 2017)
73d9919  Add correlation test (Jul 17, 2017)
5e1197f  Add setup dll file (Jul 17, 2017)
d90469b  do allllll da tests (Jul 17, 2017)
cf57e09  Change to pytest (Jul 17, 2017)
f022de2  revert to trying to load the wrong thing (Jul 17, 2017)
f32642b  pep8 (Jul 17, 2017)
df74bf3  libdir not libpath (Jul 17, 2017)
f6ab25d  try in place build (Jul 17, 2017)
f6eeeda  remove debug prints (Jul 17, 2017)
c5e1979  only test on 2.7 on appveyor (Jul 17, 2017)
c66b88e  CHANGES, install and docs (Jul 17, 2017)
439b1b8  Add 2D fftw routines (Jul 18, 2017)
ce3146b  Use fftw threads (Jul 18, 2017)
15185a1  C89 standards (Jul 18, 2017)
b7fe35e  Force more recent MSVC (Jul 18, 2017)
748e882  Force compile as cpp (Jul 18, 2017)
a6a8271  Try and get the threaded versions working on CI (Jul 18, 2017)
819f634  pyflakes (Jul 18, 2017)
6878ede  Require libxml (Jul 18, 2017)
80ef087  Typo (Jul 18, 2017)
04dbbc9  Add define structures for N_THREADS (Jul 18, 2017)
91e8775  Try pre-processor ifdef (Jul 18, 2017)
5318a31  link errors (Jul 18, 2017)
9bcdcce  link errors (Jul 18, 2017)
c5e2a12  Minor textual changes (Jul 19, 2017)
a86c3e0  Include spike_test in match_filter (Jul 19, 2017)
7c46ef7  parallel fftw runs in 1D to avoid non-threadsafety (Jul 19, 2017)
1492c09  Merge branch 'develop' into Xarray-internals (Jul 19, 2017)
8bddade  time domain in double precision (Jul 19, 2017)
e188aed  Cleaning up (Jul 20, 2017)
17f3bc4  Merge branch 'develop' into Xarray-internals (Jul 20, 2017)
6fc913e  Import naming changes (Jul 20, 2017)
ba88ab1  Merge branch 'Xarray-internals' of https://github.com/eqcorrscan/EQco… (Jul 20, 2017)
c4bb36a  pep8 (Jul 20, 2017)
2 changes: 1 addition & 1 deletion .travis.yml
@@ -56,7 -56,7 @@ install:
PYFLAKES="pyflakes=0.9.0"
fi
- echo $PYTHON_VERSION
- conda create -q -n test-environment python=$PYTHON_VERSION colorama numpy scipy matplotlib obspy flake8 mock coverage opencv3 bottleneck
- conda create -q -n test-environment python=$PYTHON_VERSION colorama numpy scipy matplotlib obspy flake8 mock coverage opencv3 bottleneck xarray
- source activate test-environment
- conda install $PYFLAKES
- conda install pyproj
1 change: 1 addition & 0 deletions CHANGES.md
@@ -56,6 +56,7 @@ will not be used.
* Stop enforcing two-channel template channel names.
* Fix bug in detection_multiplot which didn't allow streams with
fewer traces than template;
* Update internals to SciPy correlation rather than openCV (Major change);


## 0.1.6
3 changes: 2 additions & 1 deletion appveyor.yml
@@ -67,7 +67,8 @@ install:
build: false

test_script:
- "%CMD_IN_ENV% python setup.py develop"
- "%CMD_IN_ENV% python setup.py build"
- "%CMD_IN_ENV% pip install . --no-deps"
- "%CMD_IN_ENV% py.test --ignore=eqcorrscan/tests/tutorial_test.py"

after_test:
230 changes: 194 additions & 36 deletions eqcorrscan/core/match_filter.py
@@ -41,6 +41,7 @@
from obspy import Trace, Catalog, UTCDateTime, Stream, read, read_events
from obspy.core.event import Event, Pick, CreationInfo, ResourceIdentifier
from obspy.core.event import Comment, WaveformStreamID
from scipy.fftpack import next_fast_len

from eqcorrscan.utils.timer import Timer
from eqcorrscan.utils.findpeaks import find_peaks2_short, decluster
@@ -51,6 +52,31 @@
from eqcorrscan.core.lag_calc import lag_calc


def _spike_test(stream, percent=0.99, multiplier=1e6):
> Review comment (Collaborator): Is this for avoiding bottleneck issue 164? It might be worth adding a bit in the docstring about why this function might be used.

> Reply (Author): It's not; this was for an issue from the openCV correlations. I'm not sure if it is needed for the scipy internals. I need to check that, and if it's not needed I should remove it: it's really slow and at least one user has complained!

> Reply (Author): So I have sped this up a lot, and it looks like it is needed to stabilise FFTs.

"""
Check for very large spikes in data and raise an error if found.

:param stream: Stream to look for spikes in.
:type stream: :class:`obspy.core.stream.Stream`
:param percent: Percentage as a decimal to calculate range for.
:type percent: float
:param multiplier: Multiplier of range to define a spike.
:type multiplier: float
"""
for tr in stream:
if (tr.data > 2 * np.max(
np.sort(np.abs(
tr))[0:int(percent * len(tr.data))]) * multiplier).sum() > 0:
msg = ('Spikes above ' + str(multiplier) +
' of the range of ' + str(percent) +
' of the data present, check. \n ' +
'This would otherwise likely result in an issue during ' +
'FFT prior to cross-correlation.\n' +
'If you think this spike is real please report ' +
'this as a bug.')
raise MatchFilterError(msg)
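The spike criterion above can be sketched standalone: a sample counts as a spike if it exceeds twice the largest value among the smallest `percent` fraction of absolute amplitudes, scaled by `multiplier`. This is a minimal sketch on a plain array rather than an obspy Trace; `has_spike` is a hypothetical helper name, not part of EQcorrscan:

```python
import numpy as np

def has_spike(data, percent=0.99, multiplier=1e6):
    # Threshold: twice the max of the lowest `percent` of absolute
    # amplitudes, scaled by `multiplier` (same criterion as _spike_test).
    sorted_abs = np.sort(np.abs(data))[0:int(percent * len(data))]
    threshold = 2 * np.max(sorted_abs) * multiplier
    return bool((data > threshold).sum() > 0)

print(has_spike(np.ones(100)))  # clean data -> False
spiky = np.ones(100)
spiky[5] = 1e9                  # one huge sample well above 2e6 threshold
print(has_spike(spiky))         # -> True
```

Data failing this check would, per the error message above, destabilise the FFT taken before cross-correlation.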


class MatchFilterError(Exception):
"""
Default error for match-filter errors.
@@ -3360,6 +3386,170 @@ def normxcorr2(template, image):
return ccc


def multi_normxcorr(templates, stream, pads):
"""
Compute the normalized cross-correlation of multiple templates with data.

:param templates: 2D Array of templates
:type templates: np.ndarray
:param stream: 1D array of continuous data
:type stream: np.ndarray
:param pads: List of ints of pad lengths in the same order as templates
:type pads: list

:return: np.ndarray
"""
# TODO:: Try other fft methods: pyfftw?
import bottleneck
from scipy.signal.signaltools import _centered
from scipy.fftpack.helper import next_fast_len

# Generate a template mask
used_chans = ~np.isnan(templates).any(axis=1)
# Currently have to use float64 as bottleneck runs into issues with other
# types: https://github.com/kwgoodman/bottleneck/issues/164
stream = stream.astype(np.float64)
templates = templates.astype(np.float64)
template_length = templates.shape[1]
stream_length = len(stream)
fftshape = next_fast_len(template_length + stream_length - 1)
# Set up normalizers
stream_mean_array = bottleneck.move_mean(
stream, template_length)[template_length - 1:]
stream_std_array = bottleneck.move_std(
stream, template_length)[template_length - 1:]
# Normalize and flip the templates
norm = ((templates - templates.mean(axis=-1, keepdims=True)) / (
templates.std(axis=-1, keepdims=True) * template_length))
norm_sum = norm.sum(axis=-1, keepdims=True)
stream_fft = np.fft.rfft(stream, fftshape)
template_fft = np.fft.rfft(np.flip(norm, axis=-1), fftshape, axis=-1)
> Review comment (Collaborator): Numpy's flip was added in version 1.12.0, so we need to make sure and bump the version in the setup.py.

> Reply (Author): Obspy 1.0.3 does not play nice on Windows, but the current master does. I'm going to keep appveyor running the obspy master and travis running the current release until the next obspy release, when appveyor should revert to the current obspy release. I think it comes down to their pinning of matplotlib...?
res = np.fft.irfft(template_fft * stream_fft,
fftshape)[:, 0:template_length + stream_length - 1]
res = ((_centered(res, stream_length - template_length + 1)) -
norm_sum * stream_mean_array) / stream_std_array
for i in range(len(pads)):
# This is a hack from padding templates with nan data
if np.isnan(res[i]).all():
res[i] = np.zeros(len(res[i]))
else:
res[i] = np.append(res[i], np.zeros(pads[i]))[pads[i]:]
return res.astype(np.float32), used_chans
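As a sanity check on the frequency-domain normalisation above, a single-template, single-channel version can be compared against per-window Pearson correlation. This is a simplified sketch, not the PR's code: it drops bottleneck and the nan-padding hack, uses `sliding_window_view` (NumPy >= 1.20) for the moving statistics, and skips `next_fast_len` (which the PR uses to pick a fast composite FFT length):

```python
import numpy as np

def normxcorr_fft(template, data):
    # Demean and scale the template so that, after correlation and division
    # by the moving std of the data, each output is a correlation coefficient.
    n = len(template)
    fftlen = len(data) + n - 1
    norm = (template - template.mean()) / (template.std() * n)
    # Moving mean and std of the data over windows the length of the template.
    windows = np.lib.stride_tricks.sliding_window_view(data, n)
    move_mean = windows.mean(axis=-1)
    move_std = windows.std(axis=-1)
    # Correlation via FFT: convolve the flipped template with the data,
    # then keep only the fully-overlapping ("valid") part.
    res = np.fft.irfft(np.fft.rfft(norm[::-1], fftlen) *
                       np.fft.rfft(data, fftlen), fftlen)
    res = res[n - 1:len(data)]
    # norm sums to ~0, but subtracting keeps parity with the routine above.
    return (res - norm.sum() * move_mean) / move_std

rng = np.random.default_rng(42)
t, d = rng.standard_normal(8), rng.standard_normal(50)
direct = np.array([np.corrcoef(t, d[i:i + 8])[0, 1] for i in range(43)])
print(np.allclose(normxcorr_fft(t, d), direct))  # True
```

The equivalence holds because dividing the demeaned template by `n * std` and the result by the per-window std of the data reproduces the Pearson correlation of the template with each window.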


def multichannel_xcorr(templates, stream, use_dask=False, compute=True,
cores=1):
"""
Cross-correlate multiple channels either in parallel or not

:type templates: list
:param templates:
A list of templates, where each one should be an obspy.Stream object
containing multiple traces of seismic data and the relevant header
information.
:type stream: obspy.core.stream.Stream
:param stream:
A single Stream object to be correlated with the templates. This is
in effect the image in normxcorr2 and cv2.
:type use_dask: bool
:param use_dask:
Whether to use dask for multiprocessing or not; if False, will use
python native multiprocessing.
:type compute: bool
:param compute:
Only valid if use_dask=True. If compute=False, the returned result will
be a dask.delayed object, useful if you are using dask to compute
multiple time-steps at the same time.
:type cores: int
:param cores:
Number of processes to use; if set to None and use_dask=False, no
multiprocessing will be done.

:returns:
New list of :class:`numpy.ndarray` objects. These will contain
the correlation sums for each template for this day of data.
:rtype: list
:returns:
list of ints as number of channels used for each cross-correlation.
:rtype: list
:returns:
list of list of tuples of station, channel for all cross-correlations.
:rtype: list

.. Note::
Each template must contain the same channels as every other template,
the stream must also contain the same channels (note that if there
are duplicate channels in the template you do not need duplicate
channels in the stream).
"""
no_chans = np.zeros(len(templates))
chans = [[] for _i in range(len(templates))]
# Do some reshaping
stream.sort(['network', 'station', 'location', 'channel'])
t_starts = []
for template in templates:
template.sort(['network', 'station', 'location', 'channel'])
t_starts.append(min([tr.stats.starttime for tr in template]))
seed_ids = [tr.id + '_' + str(i) for i, tr in enumerate(templates[0])]
template_array = {}
stream_array = {}
pad_array = {}
for i, seed_id in enumerate(seed_ids):
t_ar = np.array([template[i].data for template in templates])
template_array.update({seed_id: t_ar})
stream_array.update(
{seed_id: stream.select(id=seed_id.split('_')[0])[0].data})
pad_list = [
int(round(template[i].stats.sampling_rate *
(template[i].stats.starttime - t_starts[j])))
for j, template in zip(range(len(templates)), templates)]
pad_array.update({seed_id: pad_list})
# if use_dask:
# import dask
# xcorrs = []
# for seed_id in seed_ids:
# tr_xcorrs, tr_chans = dask.delayed(multi_normxcorr)(
# templates=template_array[seed_id],
# stream=stream.select(id=seed_id.split('_')[0])[0].data)
# xcorrs.append(tr_xcorrs)
# cccsums = dask.delayed(np.sum)(xcorrs, axis=0)
# if compute:
# cccsums.compute()
if cores is None:
> Review comment (Collaborator): Would it make sense to abstract the multiprocessing further up? I know several functions use something similar, so maybe we could make a generic pool interface on the module level; then we could have persistent processes/threads in the pool so we wouldn't need to spin them up every time.

cccsums = np.zeros([len(templates),
len(stream[0]) - len(templates[0][0]) + 1])
for seed_id in seed_ids:
tr_xcorrs, tr_chans = multi_normxcorr(
templates=template_array[seed_id],
stream=stream_array[seed_id], pads=pad_array[seed_id])
cccsums = np.sum([cccsums, tr_xcorrs], axis=0)
no_chans += tr_chans.astype(np.int)
for chan, state in zip(chans, tr_chans):
if state:
chan.append((seed_id.split('.')[1],
seed_id.split('.')[-1].split('_')[0]))
else:
pool = Pool(processes=cores)
results = [pool.apply_async(
multi_normxcorr, (template_array[seed_id], stream_array[seed_id],
pad_array[seed_id]))
for seed_id in seed_ids]
pool.close()
results = [p.get() for p in results]
xcorrs = [p[0] for p in results]
tr_chans = np.array([p[1] for p in results])
pool.join()
cccsums = np.sum(xcorrs, axis=0)
no_chans = np.sum(tr_chans.astype(np.int), axis=0)
for seed_id, tr_chan in zip(seed_ids, tr_chans):
for chan, state in zip(chans, tr_chan):
if state:
chan.append((seed_id.split('.')[1],
seed_id.split('.')[-1].split('_')[0]))
return cccsums, no_chans, chans
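The parallel branch above follows a submit/close/get/join pattern with `Pool.apply_async`. A minimal sketch of that pattern, with a trivial stand-in worker (`_scale` and `sum_in_parallel` are hypothetical names, not EQcorrscan functions):

```python
import numpy as np
from multiprocessing import Pool

def _scale(arr, factor):
    # Stand-in for multi_normxcorr: any picklable per-channel computation.
    return arr * factor

def sum_in_parallel(arrays, factor, cores=2):
    # Same shape as the branch above: submit one job per channel, close the
    # pool, fetch results in submission order, join, then reduce with a sum.
    pool = Pool(processes=cores)
    results = [pool.apply_async(_scale, (arr, factor)) for arr in arrays]
    pool.close()
    out = [r.get() for r in results]
    pool.join()
    return np.sum(out, axis=0)

if __name__ == '__main__':
    # [1,1,1,1]*3 + [2,2,2,2]*3 summed channel-wise gives [9, 9, 9, 9].
    print(sum_in_parallel([np.ones(4), 2 * np.ones(4)], 3.0))
```

Because `apply_async` results are collected in submission order, the channel-to-result mapping stays deterministic, which the code above relies on when pairing `tr_chans` back to `seed_ids`.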


def _template_loop(template, chan, stream_ind, debug=0, i=0):
"""
Handle individual template correlations.
@@ -3421,7 +3611,7 @@ def _template_loop(template, chan, stream_ind, debug=0, i=0):
return i, ccc


def _channel_loop(templates, stream, cores=1, debug=0, internal=True):
def _channel_loop(templates, stream, cores=1, debug=0):
"""
Internal loop for parallel processing.

@@ -3442,10 +3632,6 @@ def _channel_loop(templates, stream, cores=1, debug=0):
:param cores: Number of cores to loop over
:type debug: int
:param debug: Debug level.
:type internal: bool
:param internal:
Whether to use the internal Python code (True) or the experimental
compilled code.

:returns:
New list of :class:`numpy.ndarray` objects. These will contain
Expand All @@ -3464,8 +3650,6 @@ def _channel_loop(templates, stream, cores=1, debug=0, internal=True):
are duplicate channels in the template you do not need duplicate
channels in the stream).
"""
if not internal:
print('Not yet coded')
num_cores = cores
if len(templates) < num_cores:
num_cores = len(templates)
@@ -3809,7 +3993,7 @@ def match_filter(template_names, template_list, st, threshold,
raise MatchFilterError(msg)
outtic = time.clock()
if debug >= 2:
print('Ensuring all template channels have matches in long data')
print('Ensuring all template channels have matches in continuous data')
template_stachan = {}
# Work out what station-channel pairs are in the templates, including
# duplicate station-channel pairs. We will use this information to fill
@@ -3908,9 +4092,8 @@ def match_filter(template_names, template_list, st, threshold,
for template in templates:
print(template)
print(stream)
[cccsums, no_chans, chans] = _channel_loop(
templates=templates, stream=stream, cores=cores, debug=debug,
internal=internal)
[cccsums, no_chans, chans] = multichannel_xcorr(
templates=templates, stream=stream, cores=cores)
if len(cccsums[0]) == 0:
raise MatchFilterError('Correlation has not run, zero length cccsum')
outtoc = time.clock()
@@ -4029,31 +4212,6 @@ def match_filter(template_names, template_list, st, threshold,
return detections, det_cat, detection_streams


def _spike_test(stream, percent=0.99, multiplier=1e6):
"""
Check for very large spikes in data and raise an error if found.

:param stream: Stream to look for spikes in.
:type stream: :class:`obspy.core.stream.Stream`
:param percent: Percentage as a decimal to calcualte range for.
:type percent: float
:param multiple: Multiplier of range to define a spike.
:type multiple: float
"""
for tr in stream:
if (tr.data > 2 * np.max(
np.sort(np.abs(
tr))[0:int(percent * len(tr.data))]) * multiplier).sum() > 0:
msg = ('Spikes above ' + str(multiplier) +
' of the range of ' + str(percent) +
' of the data present, check. \n ' +
'This would otherwise likely result in an issue during ' +
'FFT prior to cross-correlation.\n' +
'If you think this spike is real please report ' +
'this as a bug.')
raise MatchFilterError(msg)


if __name__ == "__main__":
import doctest
doctest.testmod()
11 changes: 8 additions & 3 deletions eqcorrscan/tests/match_filter_test.py
@@ -341,7 +341,7 @@ def setUpClass(cls):
(tr.stats.network, tr.stats.station, tr.stats.channel))
template_stachans = list(set(template_stachans))
bulk_info = [(stachan[0], stachan[1], '*', stachan[2],
t1, t1 + process_len)
t1, t1 + process_len + 1)
for stachan in template_stachans]
# Just downloading an hour of data
print('Downloading continuous data')
@@ -645,8 +645,13 @@ def test_tribe_detect(self):
for key in det.__dict__.keys():
if key == 'event':
continue
self.assertEqual(det.__dict__[key],
check_det.__dict__[key])
if isinstance(det.__dict__[key], float):
self.assertAlmostEqual(
det.__dict__[key], check_det.__dict__[key],
places=2)
else:
self.assertEqual(
det.__dict__[key], check_det.__dict__[key])
# self.assertEqual(fam.template, check_fam.template)

def test_client_detect(self):
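The test change just above swaps exact `assertEqual` for `assertAlmostEqual(places=2)` on float detection attributes, since correlation sums computed in float32 can differ in the low decimals across platforms and FFT backends. A minimal illustration with hypothetical values, not taken from the test suite:

```python
import unittest

class FloatCompareDemo(unittest.TestCase):
    def test_places(self):
        # Two "equal" detection values that differ below the 2nd decimal.
        detected, expected = 8.23456, 8.23001
        # Exact comparison fails...
        with self.assertRaises(AssertionError):
            self.assertEqual(detected, expected)
        # ...but places=2 checks round(detected - expected, 2) == 0.
        self.assertAlmostEqual(detected, expected, places=2)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(FloatCompareDemo))
print(result.wasSuccessful())  # True
```

Note `places` rounds the difference, so `places=2` tolerates discrepancies up to half of 0.01, which is ample for platform-level float32 noise.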
13 changes: 6 additions & 7 deletions setup.py
@@ -72,21 +72,20 @@
if not READ_THE_DOCS:
install_requires = ['numpy>=1.8.0', 'obspy>=1.0.0',
'matplotlib>=1.3.0', 'joblib>=0.8.4',
'scipy>=0.14', 'multiprocessing',
'LatLon', 'h5py', 'cython', 'bottleneck']
'scipy>=0.18', 'LatLon', 'cython',
'bottleneck', 'xarray']
else:
install_requires = ['numpy>=1.8.0', 'obspy>=1.0.0',
'matplotlib>=1.3.0', 'joblib>=0.8.4',
'multiprocessing',
'LatLon']
else:
if not READ_THE_DOCS:
install_requires = ['numpy>=1.8.0', 'obspy>=0.10.2',
install_requires = ['numpy>=1.8.0', 'obspy>=1.0.0',
'matplotlib>=1.3.0', 'joblib>=0.8.4',
'scipy>=0.14', 'LatLon', 'h5py', 'cython',
'bottleneck']
'scipy>=0.18', 'LatLon', 'cython',
'bottleneck', 'xarray']
else:
install_requires = ['numpy>=1.8.0', 'obspy>=0.10.2',
install_requires = ['numpy>=1.8.0', 'obspy>=1.0.0',
> Review comment (Collaborator): As mentioned above, we need at least numpy 1.12.0 because we are using flip.

> Reply (Author): Thanks!
'matplotlib>=1.3.0', 'joblib>=0.8.4',
'LatLon']
# install_requires.append('ConfigParser')