as_strided()¶
The as_strided() function can be used to create complex views of an array. Consider this 2x4 integer array called foo.
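The original code block isn't shown here; a plausible definition, with values chosen to match the elements (30, 60) referenced later in this section, is:

```python
import numpy as np

# A 2x4 array of 64-bit integers. These particular values are an
# assumption, picked to be consistent with the walkthrough below.
foo = np.array([[10, 20, 30, 40],
                [50, 60, 70, 80]], dtype=np.int64)
print(foo.shape)  # (2, 4)
print(foo.dtype)  # int64
```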
Recall: arrays are stored in contiguous, fixed-size memory blocks. In this case, foo's eight elements are stored one after another in a single flat block. Since foo is comprised of 64-bit integers, each block of memory is 64 bits. Alternatively stated, each block of memory is 8 bytes.
To fetch the element at position (i, j), NumPy could do the following:

- Start at the beginning of the memory block
- Jump across i * 32 bytes of data to get to row i
- Jump across j * 8 bytes to get to the jth element in row i
This is exactly what the strides attribute of a NumPy array describes. For example, foo.strides returns the tuple (32, 8), meaning "axis 0 iterates by 32 bytes, axis 1 iterates by 8 bytes".
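Assuming the 2x4 int64 foo from above, you can check this directly:

```python
import numpy as np

# foo is assumed to be the 2x4 int64 array from earlier in the section
foo = np.array([[10, 20, 30, 40],
                [50, 60, 70, 80]], dtype=np.int64)

# 32 bytes to step along axis 0 (one row = 4 elements * 8 bytes each),
# 8 bytes to step along axis 1 (one element)
print(foo.strides)  # (32, 8)
```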
With np.lib.stride_tricks.as_strided(), you can create a new view of an existing array by modifying its strides, without copying or modifying its data.
Example
Here we define a 3x4 array that's based on the data in foo, but we tell NumPy to jump across 16 bytes to get to the next row and 8 bytes to get to the next column.
For example, if we request the element at index (1,0), NumPy starts at the beginning of foo and then jumps across 16 bytes (one row) plus 0 bytes (0 columns), landing at 30.
To get to index (1,3), NumPy jumps across 16 bytes (one row) plus 24 bytes (three columns) = 40 bytes, landing at 60.
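A sketch of this view, assuming the foo values from above:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

foo = np.array([[10, 20, 30, 40],
                [50, 60, 70, 80]], dtype=np.int64)  # assumed values

# 3x4 view: each new row starts 16 bytes (2 elements) after the previous
# one, so consecutive rows overlap by two elements
bar = as_strided(foo, shape=(3, 4), strides=(16, 8))
print(bar)
# [[10 20 30 40]
#  [30 40 50 60]
#  [50 60 70 80]]
```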
Warning
It's really important to understand that bar is a view of foo. If we modify bar, foo will be modified as well.
Example
foo changed even though we modified bar. Furthermore, if we print(bar), you can see that element (1,0) and element (0,2) changed. That's because these elements in bar point to the same block of memory.
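A sketch of this side effect, assuming the foo and bar definitions above (the value -99 is an arbitrary choice):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

foo = np.array([[10, 20, 30, 40],
                [50, 60, 70, 80]], dtype=np.int64)  # assumed values
bar = as_strided(foo, shape=(3, 4), strides=(16, 8))

bar[1, 0] = -99    # bar[1, 0] aliases foo[0, 2] ...
print(foo[0, 2])   # -99: foo changed even though we modified bar
print(bar[0, 2])   # -99: ... and bar[0, 2] points to the same memory
```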
Danger
When using as_strided(), be careful that your strides make sense. Otherwise you may end up pointing to memory used by a different variable. This can have bad side effects, so beware.
sliding_window_view()¶
NumPy has a convenient sliding_window_view() function for making sliding windows.
Relation to as_strided()
Under the hood, sliding_window_view() is just a fancy wrapper for as_strided(). Anything you can do with sliding_window_view(), you could also do with as_strided().
For example, given this array, we can make a sliding window with length 3 like this.
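The array itself isn't shown in the text; a simple stand-in is assumed here:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

foo = np.array([0, 1, 2, 3, 4])  # assumed values

fooview = sliding_window_view(foo, window_shape=3)
print(fooview)
# [[0 1 2]
#  [1 2 3]
#  [2 3 4]]
```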
The result is a read-only view of foo. If you try to modify fooview, you'll get an error! We can make it writeable by passing writeable=True.
foo is changed too! Since fooview is a view of foo, modifying fooview also modifies foo.
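A sketch of both behaviors, assuming the same stand-in foo as above:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

foo = np.array([0, 1, 2, 3, 4])  # assumed values

# The default view is read-only; writing to it raises a ValueError
readonly = sliding_window_view(foo, window_shape=3)
try:
    readonly[0, 0] = 100
except ValueError as e:
    print(e)  # assignment destination is read-only

# With writeable=True we can assign through the view
fooview = sliding_window_view(foo, window_shape=3, writeable=True)
fooview[0, 0] = 100
print(foo)  # foo is changed too!
```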
sliding_window_view() works for multidimensional arrays too. Consider this 3x3 array, zoo.
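zoo's values aren't shown in the text; a simple stand-in is assumed for the examples that follow:

```python
import numpy as np

# Assumed 3x3 values for zoo
zoo = np.arange(1, 10).reshape(3, 3)
print(zoo)
# [[1 2 3]
#  [4 5 6]
#  [7 8 9]]
```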
We can create various sliding windows from zoo by varying the window_shape and axis parameters.
With window_shape=2, NumPy searches for length-2 subarrays inside zoo like [a, b], sliding along the axis given by the axis parameter.
In pseudocode, you could describe this algorithm as:

- Iterate over the elements of zoo in row-major order.
- At each step, move along axis 0 to fill an array with shape (2,).
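With the assumed zoo values from above, this looks like:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

zoo = np.arange(1, 10).reshape(3, 3)  # assumed stand-in values

v = sliding_window_view(zoo, window_shape=2, axis=0)
print(v.shape)  # (2, 3, 2)
print(v[0, 0])  # [1 4]  <- a length-2 window running down axis 0
```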
In pseudocode, you could describe this algorithm as:

- Iterate over the elements of zoo in row-major order.
- At each step, move along axis 1 to fill an array with shape (2,).
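Again with the assumed zoo values:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

zoo = np.arange(1, 10).reshape(3, 3)  # assumed stand-in values

v = sliding_window_view(zoo, window_shape=2, axis=1)
print(v.shape)  # (3, 2, 2)
print(v[0, 0])  # [1 2]  <- a length-2 window running across axis 1
```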
With window_shape=(2,2), NumPy will search for subarrays inside zoo like [[a, b], [c, d]].
In pseudocode, you could describe this algorithm as:

- Iterate over the elements of zoo in row-major order.
- At each step, move along axis 0 first, then axis 1 to fill an array with shape (2,2).
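With the assumed zoo values:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

zoo = np.arange(1, 10).reshape(3, 3)  # assumed stand-in values

v = sliding_window_view(zoo, window_shape=(2, 2))
print(v.shape)  # (2, 2, 2, 2)
print(v[0, 0])  # the 2x2 window anchored at zoo's top-left corner
# [[1 2]
#  [4 5]]
```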
In pseudocode, you could describe this algorithm as:

- Iterate over the elements of zoo in row-major order.
- At each step, move along axis 1 first, then axis 0 to fill an array with shape (2,2).
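One way to realize this "axis 1 first, then axis 0" fill order is to pass the axes explicitly via the axis parameter. The exact call isn't shown in the text, so this is an assumption; each window comes out as the transpose of the default-order window:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

zoo = np.arange(1, 10).reshape(3, 3)  # assumed stand-in values

# axis=(1, 0): the first window dimension runs along axis 1,
# the second along axis 0
v = sliding_window_view(zoo, window_shape=(2, 2), axis=(1, 0))
print(v.shape)  # (2, 2, 2, 2)
print(v[0, 0])
# [[1 4]
#  [2 5]]
```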
einsum()¶
You can use the einsum() function to quickly and efficiently multiply and sum arrays using Einstein summation.
Consider these 1-d arrays A and B. If we do np.einsum('i,j->i', A, B), we get back the array [ 0, 22, 44, 66].
In pseudocode, you could describe the above algorithm as: for each i, set $Y_{i}$ to the sum of $A_{i} * B_{j}$ over every j.
The first parameter of einsum() is subscripts. Assuming we're operating on two arrays A and B, it always has the form
"subscripts for A's axes, subscripts for B's axes -> subscripts for output axes"
In this case:

- A has one dimension, so we give it the subscript i
- B has one dimension, so we give it the subscript j
- By reusing subscript i for the output axes, we're saying "the output array has one dimension and it's the same length as A". In other words, element $A_{i}$ will always feed into a corresponding $Y_{i}$.
Example 1¶
In pseudocode: iterate over each i in A and each j in B, adding $A_{i} * B_{j}$ to a running sum. Since there's no subscript to the right of the arrow in the subscript string, we're telling NumPy that our output should have 0 dimensions. In other words, the output should be a scalar.
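A sketch with the same assumed A and B as before:

```python
import numpy as np

A = np.array([0, 1, 2, 3])  # assumed values, as before
B = np.array([4, 5, 6, 7])

# 'i,j->' sums A[i] * B[j] over every (i, j) pair into a scalar
total = np.einsum('i,j->', A, B)
print(total)  # 132, i.e. sum(A) * sum(B) = 6 * 22
```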
Example 3¶
In pseudocode: for each s in A and each t in B, set element (s, t) of the output to $A_{s} * B_{t}$.
Note
The output should be 2-dimensional because it has two subscripts: s and t. Since s iterates over A (length 4) and t iterates over B (length 4), we know the output should have shape (4,4).
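A sketch with the same assumed A and B:

```python
import numpy as np

A = np.array([0, 1, 2, 3])  # assumed values, as before
B = np.array([4, 5, 6, 7])

# 's,t->st' builds the outer product: out[s, t] = A[s] * B[t]
outer = np.einsum('s,t->st', A, B)
print(outer.shape)  # (4, 4)
print(outer)
# [[ 0  0  0  0]
#  [ 4  5  6  7]
#  [ 8 10 12 14]
#  [12 15 18 21]]
```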
Example 4¶
einsum() really starts to shine in two dimensions. Consider these 2x2 arrays C and D, and observe this Einstein sum.
Let's break down the subscript string.

- ij tells us the first array has exactly two dimensions
- ji tells us the second array also has exactly two dimensions. Furthermore, the length of its first dimension matches the length of the first array's second dimension since they both use subscript j, and the length of its second dimension matches the length of the first array's first dimension since they both use subscript i.
- The bit after the arrow is empty, so we know the output will have 0 dimensions (it'll be a scalar).
In pseudocode: for each i and each j, add $C_{ij} * D_{ji}$ to a running sum.
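A sketch of this sum. C and D aren't shown in the text, so these 2x2 values are assumed:

```python
import numpy as np

C = np.array([[1, 2],   # assumed values
              [3, 4]])
D = np.array([[5, 6],   # assumed values
              [7, 8]])

# 'ij,ji->' sums C[i, j] * D[j, i] over every (i, j) pair
result = np.einsum('ij,ji->', C, D)
print(result)  # 69

# Same number, but computed via a temporary 2x2 array
print(np.sum(C * D.T))  # 69
```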
Tip
On the surface, this particular Einstein sum is equivalent to doing np.sum(C * D.T). However, einsum() only accesses each element once, whereas np.sum(C * D.T) accesses each element twice: first when it does the multiplication and second when it does the sum. More importantly, np.sum(C * D.T) creates a temporary array that takes up memory before getting summed into a scalar. einsum() avoids this memory consumption, which, if you're dealing with big arrays, can make a significant difference.