API Documentation

class walrus.Database(Redis)

Redis-py client with some extras.

Array(key)

Create an Array instance wrapping the given key.

Hash(key)

Create a Hash instance wrapping the given key.

HyperLogLog(key)

Create a HyperLogLog instance wrapping the given key.

Index(name, **options)

Create an Index (full-text search index) with the given name and options.

List(key)

Create a List instance wrapping the given key.

Set(key)

Create a Set instance wrapping the given key.

Stream(key)

Create a Stream instance wrapping the given key.

ZSet(key)

Create a ZSet instance wrapping the given key.

__init__(*args, **kwargs)
Parameters:
  • args – Arbitrary positional arguments to pass to the base Redis instance.

  • kwargs – Arbitrary keyword arguments to pass to the base Redis instance.

  • script_dir (str) – Path to directory containing walrus scripts. Use “script_dir=False” to disable loading any scripts.
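
For illustration, a minimal sketch of constructing a client (assuming a Redis server on localhost; the keyword arguments shown are standard redis-py connection options). The db instance is reused in the illustrative snippets below:

from walrus import Database

db = Database(host='localhost', port=6379, db=0)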

__iter__()

Iterate over the keys of the selected database.

bit_field(key)

Container for working with the Redis BITFIELD command.

Returns:

a BitField instance.

bloom_filter(key, size=65536)

Create a BloomFilter container type.

Bloom filters are probabilistic data structures used to answer the question: “is X a member of set S?” It is possible to receive a false positive, but impossible to receive a false negative (in other words, if the bloom filter contains a value, it will never erroneously report that it does not contain that value). The accuracy of the bloom filter can be improved, and the likelihood of a false positive reduced, by increasing the size of the filter. The default size is 64KB (524,288 bits).
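
A minimal sketch of bloom filter usage (assuming db is the Database instance from above; the key name and values are arbitrary):

bf = db.bloom_filter('bf-demo')
bf.add(b'alpha')
b'alpha' in bf   # True
b'bravo' in bf   # almost certainly False (false negatives cannot occur)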

cache(name='cache', default_timeout=3600)

Create a Cache instance.

Parameters:
  • name (str) – The name used to prefix keys used to store cached data.

  • default_timeout (int) – The default key expiry.

Returns:

A Cache instance.

cas(key, value, new_value)

Perform an atomic compare-and-set on the value in “key”, using a prefix match on the provided value.

consumer_group(group, keys, consumer=None)

Create a named ConsumerGroup instance for the given key(s).

Parameters:
  • group – name of consumer group

  • keys – stream identifier(s) to monitor. May be a single stream key, a list of stream keys, or a key-to-minimum id mapping. The minimum id for each stream should be considered an exclusive lower-bound. The ‘$’ value can also be used to only read values added after our command started blocking.

  • consumer – name for consumer within group

Returns:

a ConsumerGroup instance

counter(name)

Create a Counter instance.

Parameters:

name (str) – The name used to store the counter’s value.

Returns:

A Counter instance.

get_key(key)

Return a rich object for the given key. For instance, if a hash key is requested, then a Hash will be returned.

Note: only works for Hash, List, Set and ZSet.

Parameters:

key (str) – Key to retrieve.

Returns:

A hash, set, list, zset or array.
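
A brief sketch (assuming db is a Database instance; 'h-demo' is an arbitrary key; values are returned as bytes unless response decoding is enabled):

db.Hash('h-demo').update(name='huey')
obj = db.get_key('h-demo')   # returns a Hash wrapper
obj['name']                  # b'huey'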

get_temp_key()

Generate a temporary random key using UUID4.

graph(name, *args, **kwargs)

Create a Graph instance.

Parameters:

name (str) – The namespace for the graph metadata.

Returns:

a Graph instance.

listener(channels=None, patterns=None, is_async=False)

Decorator for wrapping functions used to listen for Redis pub-sub messages.

The listener will listen until the decorated function raises a StopIteration exception.

Parameters:
  • channels (list) – Channels to listen on.

  • patterns (list) – Patterns to match.

  • is_async (bool) – Whether to start the listener in a separate thread.

lock(name, ttl=None, lock_id=None)

Create a named Lock instance. The lock implements an API similar to the standard library’s threading.Lock, and can also be used as a context manager or decorator.

Parameters:
  • name (str) – The name of the lock.

  • ttl (int) – The time-to-live for the lock in milliseconds (optional). If the ttl is None then the lock will not expire.

  • lock_id (str) – Optional identifier for the lock instance.

rate_limit(name, limit=5, per=60, debug=False)

Rate limit implementation. Allows up to limit events every per seconds.

See Rate Limit for more information.
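
An illustrative sketch, using the RateLimit.limit() method documented later in this section (the limit name and event key are arbitrary):

rl = db.rate_limit('api-calls', limit=5, per=60)
if rl.limit('client-123'):
    pass  # True: this event exceeds the limit and should be rejected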

rate_limit_lua(name, limit=5, per=60, debug=False)

Rate limit implementation. Allows up to limit events every per seconds. Uses a Lua script for atomicity.

See Rate Limit for more information.

run_script(script_name, keys=None, args=None)

Execute a walrus script with the given arguments.

Parameters:
  • script_name – The base name of the script to execute.

  • keys (list) – Keys referenced by the script.

  • args (list) – Arguments passed in to the script.

Returns:

Return value of script.

Note

Redis scripts require two parameters, keys and args, which are referenced in lua as KEYS and ARGV.

search(pattern)

Search the keyspace of the selected database using the given search pattern.

Parameters:

pattern (str) – Search pattern using wildcards.

Returns:

Iterator that yields matching keys.

stream_log(callback, connection_id='monitor')

Stream Redis activity one line at a time to the given callback.

Parameters:

callback – A function that accepts a single argument, the Redis command.

time_series(group, keys, consumer=None)

Create a named TimeSeries consumer-group for the given key(s). TimeSeries objects are almost identical to ConsumerGroup except they offer a higher level of abstraction and read/write message ids as datetimes.

Parameters:
  • group – name of consumer group

  • keys – stream identifier(s) to monitor. May be a single stream key, a list of stream keys, or a key-to-minimum id mapping. The minimum id for each stream should be considered an exclusive lower-bound. The ‘$’ value can also be used to only read values added after our command started blocking.

  • consumer – name for consumer within group

Returns:

a TimeSeries instance

xsetid(name, id)

Set the last ID of the given stream.

Parameters:
  • name – stream identifier

  • id – new value for last ID

Container types

class walrus.Container(database, key)

Base-class for rich Redis object wrappers.

clear()

Clear the contents of the container by deleting the key.

dump()

Dump the contents of the given key using Redis’ native serialization format.

expire(ttl=None)

Expire the given key in the given number of seconds. If ttl is None, then any expiry will be cleared and key will be persisted.

pexpire(ttl=None)

Expire the given key in the given number of milliseconds. If ttl is None, then any expiry will be cleared and key will be persisted.
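
A short sketch of key expiry on a container (assuming db is a Database instance; the key name is arbitrary):

h = db.Hash('session-demo')
h.update(user='huey')
h.expire(30)    # expire in 30 seconds
h.expire()      # clear the TTL and persist the key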

class walrus.Hash(Container)

Redis Hash object wrapper. Supports a dictionary-like interface with some modifications.

See Hash commands for more info.

__contains__(key)

Return a boolean value indicating whether the given key exists.

__delitem__(key)

Delete the key from the hash.

__getitem__(item)

Retrieve the value at the given key. To retrieve multiple values at once, you can specify multiple keys as a tuple or list:

hsh = db.Hash('my-hash')
first, last = hsh['first_name', 'last_name']
__iter__()

Iterate over the items in the hash.

__len__()

Return the number of keys in the hash.

__setitem__(key, value)

Set the value of the given key.

as_dict(decode=False)

Return a dictionary containing all the key/value pairs in the hash.

incr(key, incr_by=1)

Increment the key by the given amount.

items(lazy=False)

Like Python’s dict.items() but supports an optional parameter lazy which will return a generator rather than a list.

keys()

Return the keys of the hash.

search(pattern, count=None)

Search the keys of the given hash using the specified pattern.

Parameters:
  • pattern (str) – Pattern used to match keys.

  • count (int) – Limit number of results returned.

Returns:

An iterator yielding matching key/value pairs.

update(_Hash__data=None, **kwargs)

Update the hash using the given dictionary or key/value pairs.

values()

Return the values stored in the hash.
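
Putting the documented methods together, a small illustrative sketch (the key name is arbitrary; values are returned as bytes unless decoding is requested):

h = db.Hash('profile-demo')
h.update(first_name='Charlie', last_name='Leifer')
h['dob'] = '1983-01-01'
first, last = h['first_name', 'last_name']
len(h)                  # 3
h.as_dict(decode=True)  # {'first_name': 'Charlie', ...}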

class walrus.List(Container)

Redis List object wrapper. Supports a list-like interface.

See List commands for more info.

__delitem__(item)

By default Redis treats deletes as delete by value, as opposed to delete by index. If an integer is passed into the function, it will be treated as an index, otherwise it will be treated as a value.

If a slice is passed, then the list will be trimmed so that it ONLY contains the range specified by the slice start and stop. Note that this differs from the default behavior of Python’s list type.
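
An illustrative sketch of the three delete behaviors described above (the key name and values are arbitrary):

l = db.List('l-demo')
l.append('a')
l.append('b')
l.append('c')
del l['b']    # string value: delete-by-value
del l[0]      # integer: delete-by-index
del l[:2]     # slice: trim the list so that only l[0:2] remains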

__getitem__(item)

Retrieve an item from the list by index. In addition to integer indexes, you can also pass a slice.

__iter__()

Iterate over the items in the list.

__len__()

Return the length of the list.

__setitem__(idx, value)

Set the value of the given index.

append(value)

Add the given value to the end of the list.

as_list(decode=False)

Return a list containing all the items in the list.

extend(value)

Extend the list by the given value.

insert_after(value, key)

Insert the given value into the list after the first occurrence of key.

insert_before(value, key)

Insert the given value into the list before the first occurrence of key.

popleft()

Remove the first item from the list.

popright()

Remove the last item from the list.

prepend(value)

Add the given value to the beginning of the list.

class walrus.Set(Container)

Redis Set object wrapper. Supports a set-like interface.

See Set commands for more info.

__and__(other)

Return the set intersection of this set and the other Set object.

__contains__(item)

Return a boolean value indicating whether the given item is a member of the set.

__delitem__(item)

Remove the given item from the set.

__iter__()

Return an iterable that yields the items of the set.

__len__()

Return the number of items in the set.

__or__(other)

Return the set union of this set and the other Set object.

__sub__(other)

Return the set difference of this set and the other Set object.
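
A short sketch of the set operators (key names are arbitrary; each operator returns a plain Python set of members, as bytes unless response decoding is enabled):

s1 = db.Set('s1-demo')
s2 = db.Set('s2-demo')
s1.add('a', 'b', 'c')
s2.add('b', 'c', 'd')
s1 & s2   # intersection
s1 | s2   # union
s1 - s2   # difference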

add(*items)

Add the given items to the set.

as_set(decode=False)

Return a Python set containing all the items in the collection.

diffstore(dest, *others)

Store the set difference of the current set and one or more others in a new key.

Parameters:
  • dest – the name of the key to store set difference

  • others – One or more Set instances

Returns:

A Set referencing dest.

interstore(dest, *others)

Store the intersection of the current set and one or more others in a new key.

Parameters:
  • dest – the name of the key to store intersection

  • others – One or more Set instances

Returns:

A Set referencing dest.

members()

Return a set() containing the members of the set.

pop()

Remove and return a random element from the set.

random(n=None)

Return a random member of the set, or up to n random members if n is specified.

remove(*items)

Remove the given item(s) from the set.

search(pattern, count=None)

Search the values of the given set using the specified pattern.

Parameters:
  • pattern (str) – Pattern used to match keys.

  • count (int) – Limit number of results returned.

Returns:

An iterator yielding matching values.

unionstore(dest, *others)

Store the union of the current set and one or more others in a new key.

Parameters:
  • dest – the name of the key to store union

  • others – One or more Set instances

Returns:

A Set referencing dest.

class walrus.ZSet(Container)

Redis ZSet object wrapper. Acts like a set and a dictionary.

See Sorted set commands for more info.

__contains__(item)

Return a boolean indicating whether the given item is in the sorted set.

__delitem__(item)

Delete the given item(s) from the set. Like __getitem__(), this method supports a wide variety of indexing and slicing options.

__getitem__(item)

Retrieve the given values from the sorted set. Accepts a variety of parameters for the input:

zs = db.ZSet('my-zset')

# Return the first 10 elements with their scores.
zs[:10, True]

# Return the first 10 elements without scores.
zs[:10]
zs[:10, False]

# Return the range of values between 'k1' and 'k10' along
# with their scores.
zs['k1':'k10', True]

# Return the range of items preceding and including 'k5'
# without scores.
zs[:'k5', False]
__iter__()

Return an iterator that will yield (item, score) tuples.

__len__()

Return the number of items in the sorted set.

__setitem__(item, score)

Add item to the set with the given score.

add(_mapping=None, **kwargs)

Add the given item/score pairs to the ZSet. Arguments are specified as a dictionary of item: score, or as keyword arguments.

as_items(decode=False)

Return a list of 2-tuples consisting of key/score.

bpopmax(timeout=0)

Atomically remove the highest-scoring item from the set, blocking until an item becomes available or timeout is reached (0 for no timeout, default).

Returns a 2-tuple of (item, score).

bpopmin(timeout=0)

Atomically remove the lowest-scoring item from the set, blocking until an item becomes available or timeout is reached (0 for no timeout, default).

Returns a 2-tuple of (item, score).

count(low, high=None)

Return the number of items between the given bounds.

incr(key, incr_by=1.0)

Increment the score of an item in the ZSet.

Parameters:
  • key – Item to increment.

  • incr_by – Amount to increment item’s score.

interstore(dest, *others, **kwargs)

Store the intersection of the current zset and one or more others in a new key.

Parameters:
  • dest – the name of the key to store intersection

  • others – One or more ZSet instances

Returns:

A ZSet referencing dest.

lex_count(low, high)

Count the number of members in a sorted set between a given lexicographical range.

popmax(count=1)

Atomically remove the highest-scoring item(s) in the set.

Returns:

a list of (item, score) tuples or None if the set is empty.

popmax_compat(count=1)

Atomically remove the highest-scoring item(s) in the set. Compatible with Redis versions < 5.0.

Returns:

a list of (item, score) tuples or None if the set is empty.

popmin(count=1)

Atomically remove the lowest-scoring item(s) in the set.

Returns:

a list of (item, score) tuples or None if the set is empty.

popmin_compat(count=1)

Atomically remove the lowest-scoring item(s) in the set. Compatible with Redis versions < 5.0.

Returns:

a list of (item, score) tuples or None if the set is empty.

range(low, high, with_scores=False, desc=False, reverse=False)

Return a range of items between low and high. By default scores will not be included, but this can be controlled via the with_scores parameter.

Parameters:
  • low – Lower bound.

  • high – Upper bound.

  • with_scores (bool) – Whether the range should include the scores along with the items.

  • desc (bool) – Whether to sort the results in descending order.

  • reverse (bool) – Whether to select the range in reverse.
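
A brief sketch of range() (the key name, members and scores are arbitrary):

zs = db.ZSet('scores-demo')
zs.add({'huey': 3.0, 'mickey': 6.0, 'zaizee': 2.5})
zs.range(0, -1, with_scores=True)  # all items, ascending by score
zs.range(0, 1)                     # first two items, without scores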

range_by_lex(low, high, start=None, num=None, reverse=False)

Return a range of members in a sorted set, by lexicographical range.

rank(item, reverse=False)

Return the rank of the given item.

remove(*items)

Remove the given items from the ZSet.

remove_by_rank(low, high=None)

Remove elements from the ZSet by their rank (relative position).

Parameters:
  • low – Lower bound.

  • high – Upper bound.

remove_by_score(low, high=None)

Remove elements from the ZSet by their score.

Parameters:
  • low – Lower bound.

  • high – Upper bound.

score(item)

Return the score of the given item.

search(pattern, count=None)

Search the set, returning items that match the given search pattern.

Parameters:
  • pattern (str) – Search pattern using wildcards.

  • count (int) – Limit result set size.

Returns:

Iterator that yields matching item/score tuples.

unionstore(dest, *others, **kwargs)

Store the union of the current set and one or more others in a new key.

Parameters:
  • dest – the name of the key to store union

  • others – One or more ZSet instances

Returns:

A ZSet referencing dest.

class walrus.HyperLogLog(Container)

Redis HyperLogLog object wrapper.

See HyperLogLog commands for more info.

add(*items)

Add the given items to the HyperLogLog.

merge(dest, *others)

Merge one or more HyperLogLog instances.

Parameters:
  • dest – Key to store merged result.

  • others – One or more HyperLogLog instances.
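
An illustrative sketch (key names are arbitrary; PFCOUNT is used here, via the underlying redis-py client, to read the approximate cardinality):

hll = db.HyperLogLog('visitors-demo')
hll.add('alpha', 'bravo', 'alpha')
db.pfcount(hll.key)   # approximately 2
hll.merge('visitors-merged', db.HyperLogLog('visitors-other'))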

class walrus.Array(Container)

Custom container that emulates an array (as opposed to the linked-list implementation of List). This gives:

  • O(1) append, get, len, pop last, set

  • O(n) remove from middle

Array is built on top of the hash data type and is implemented using Lua scripts.
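
A short sketch of Array usage (the key name and values are arbitrary; items are returned as bytes unless response decoding is enabled):

arr = db.Array('arr-demo')
arr.extend(['a', 'b', 'c'])
arr[1]          # b'b'
arr.append('d')
arr.pop()       # remove the last item
len(arr)        # 3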

__contains__(item)

Return a boolean indicating whether the given item is stored in the array. O(n).

__delitem__(idx)

Delete the given index.

__getitem__(idx)

Get the value stored in the given index.

__iter__()

Return an iterable that yields array items.

__len__()

Return the number of items in the array.

__setitem__(idx, value)

Set the value at the given index.

append(value)

Append a new value to the end of the array.

as_list(decode=False)

Return a list of items in the array.

extend(values)

Extend the array, appending the given values.

pop(idx=None)

Remove an item from the array. By default this will be the last item by index, but any index can be specified.

class walrus.Stream(Container)

Redis stream container.

__delitem__(item)

Delete one or more messages by id. The index can be either a single message id or a list/tuple of multiple ids.

__getitem__(item)

Read a range of values from a stream.

The index must be a message id or a slice. An empty slice will result in reading all values from the stream. Message ids provided as lower or upper bounds are inclusive.

To specify a maximum number of messages, use the “step” parameter of the slice.
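
A brief sketch of indexing into a stream (the key name is arbitrary and the literal message id is hypothetical):

stream = db.Stream('events-demo')
msgid = stream.add({'type': 'ping'})   # id auto-generated with '*'
stream[msgid]        # read a single message by id
stream[:]            # read every message in the stream
stream[::10]         # read at most 10 messages
stream['1234-0':]    # messages with id >= '1234-0'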

__len__()

Return the length of a stream.

add(data, id='*', maxlen=None, approximate=True)

Add data to a stream.

Parameters:
  • data (dict) – data to add to stream

  • id – identifier for message (‘*’ to automatically append)

  • maxlen – maximum length for stream

  • approximate – allow stream max length to be approximate

Returns:

the added message id.

consumers_info(group)

Retrieve information about consumers within the given consumer group operating on the stream. Calls xinfo_consumers().

Parameters:

group – consumer group name

Returns:

a dictionary containing consumer metadata

delete(*id_list)

Delete one or more messages by id. The index can be either a single message id or a list/tuple of multiple ids.

get(docid)

Get a message by id.

Parameters:

docid – the message id to retrieve.

Returns:

a 2-tuple of (message id, data) or None if not found.

groups_info()

Retrieve information about consumer groups for the stream. Wraps call to xinfo_groups().

Returns:

a dictionary containing consumer group metadata

info()

Retrieve information about the stream. Wraps call to xinfo_stream().

Returns:

a dictionary containing stream metadata

range(start='-', stop='+', count=None)

Read a range of values from a stream.

Parameters:
  • start – start key of range (inclusive) or ‘-’ for oldest message

  • stop – stop key of range (inclusive) or ‘+’ for newest message

  • count – limit number of messages returned

read(count=None, block=None, last_id=None)

Monitor stream for new data.

Parameters:
  • count (int) – limit number of messages returned

  • block (int) – milliseconds to block, 0 for indefinitely

  • last_id – Last id read (an exclusive lower-bound). If the ‘$’ value is given, we will only read values added after our command started blocking.

Returns:

a list of (message id, data) 2-tuples.

set_id(id)

Set the maximum message id for the stream.

Parameters:

id – id of last-read message

trim(count=None, approximate=True, minid=None, limit=None)

Trim the stream to the given “count” of messages, discarding the oldest messages first.

Parameters:
  • count – maximum size of stream (maxlen)

  • approximate – allow size to be approximate

  • minid – evicts entries with IDs lower than the given min id.

  • limit – maximum number of entries to evict.

class walrus.ConsumerGroup(database, name, keys, consumer=None)

Helper for working with Redis Streams consumer groups functionality. Each stream associated with the consumer group is exposed as a special attribute of the ConsumerGroup object, exposing stream-specific functionality within the context of the group.

Rather than creating this class directly, use the Database.consumer_group() method.

Each registered stream within the group is exposed as a special attribute that provides stream-specific APIs within the context of the group. For more information see ConsumerGroupStream.

The streams managed by a consumer group must exist before the consumer group can be created. By default, calling ConsumerGroup.create() will automatically create stream keys for any that do not exist.

Example:

cg = db.consumer_group('groupname', ['stream-1', 'stream-2'])
cg.create()  # Create consumer group.
cg.stream_1  # ConsumerGroupStream for "stream-1"
cg.stream_2  # ConsumerGroupStream for "stream-2"
# or, alternatively:
cg.streams['stream-1']
Parameters:
  • database (Database) – Redis client

  • name – consumer group name

  • keys – stream identifier(s) to monitor. May be a single stream key, a list of stream keys, or a key-to-minimum id mapping. The minimum id for each stream should be considered an exclusive lower-bound. The ‘$’ value can also be used to only read values added after our command started blocking.

  • consumer – name for consumer

consumer(name)

Create a new consumer for the ConsumerGroup.

Parameters:

name – name of consumer

Returns:

a ConsumerGroup using the given consumer name.

create(ensure_keys_exist=True, mkstream=False)

Create the consumer group and register it with the group’s stream keys.

Parameters:
  • ensure_keys_exist – Ensure that the streams exist before creating the consumer group. Streams that do not exist will be created.

  • mkstream – Use the “MKSTREAM” option to ensure stream exists (may require unstable version of Redis).

destroy()

Destroy the consumer group.

read(count=None, block=None, consumer=None)

Read unseen messages from all streams in the consumer group. Wrapper for Database.xreadgroup method.

Parameters:
  • count (int) – limit number of messages returned

  • block (int) – milliseconds to block, 0 for indefinitely.

  • consumer – consumer name

Returns:

a list of (stream key, messages) tuples, where messages is a list of (message id, data) 2-tuples.
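
A minimal consumption loop, combining read() with the per-stream ack() method documented below (the group name, stream key and handler are illustrative):

cg = db.consumer_group('worker-group', ['events'])
cg.create()
for stream_key, messages in cg.read(count=10, block=1000):
    for message_id, data in messages:
        handle_message(data)        # hypothetical application handler
        cg.events.ack(message_id)   # acknowledge on the 'events' stream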

reset()

Reset the consumer group, clearing the last-read status for each stream so it will read from the beginning of each stream.

set_id(id='$')

Set the last-read message id for each stream in the consumer group. By default, this will be the special “$” identifier, meaning all messages are marked as having been read.

Parameters:

id – id of last-read message (or “$”).

stream_info()

Retrieve information for each stream managed by the consumer group. Calls xinfo_stream() for each stream.

Returns:

a dictionary mapping stream key to a dictionary of metadata

class walrus.containers.ConsumerGroupStream(Stream)

Helper for working with an individual stream within the context of a consumer group. This object is exposed as an attribute on a ConsumerGroup object using the stream key for the attribute name.

This class should not be created directly. It will automatically be added to the ConsumerGroup object.

For example:

cg = db.consumer_group('groupname', ['stream-1', 'stream-2'])
cg.stream_1  # ConsumerGroupStream for "stream-1"
cg.stream_2  # ConsumerGroupStream for "stream-2"
ack(*id_list)

Acknowledge that the message(s) have been processed by the consumer associated with the parent ConsumerGroup.

Parameters:

id_list – one or more message ids to acknowledge

Returns:

number of messages marked acknowledged

autoclaim(consumer, min_idle_time, start_id=0, count=None, justid=False)

Transfer ownership of pending stream entries that match the specified criteria. Similar to calling XPENDING and XCLAIM, but provides a more straightforward way to deal with message delivery failures.

Parameters:
  • consumer – name of consumer that claims the message.

  • min_idle_time – in milliseconds

  • start_id – start id

  • count – optional, upper limit of entries to claim. Default 100.

  • justid – return just IDs of messages claimed.

Returns:

a list of [next start id, [messages that were claimed]]

claim(*id_list, **kwargs)

Claim pending (unacknowledged) messages for this stream within the context of the parent ConsumerGroup.

Parameters:
  • id_list – one or more message ids to claim

  • min_idle_time – minimum idle time in milliseconds (keyword-arg).

Returns:

list of (message id, data) 2-tuples of messages that were successfully claimed

consumers_info()

Retrieve information about consumers within the given consumer group operating on the stream. Calls xinfo_consumers().

Returns:

a list of dictionaries containing consumer metadata

pending(start='-', stop='+', count=1000, consumer=None, idle=None)

List pending messages within the consumer group for this stream.

Parameters:
  • start – start id (or ‘-’ for oldest pending)

  • stop – stop id (or ‘+’ for newest pending)

  • count – limit number of messages returned

  • consumer – restrict message list to the given consumer

  • idle (int) – filter by idle-time in milliseconds (requires Redis 6.2 or newer)

Returns:

A list containing status for each pending message. Each pending message returns [id, consumer, idle time, deliveries].

read(count=None, block=None, last_id=None)

Monitor the stream for new messages within the context of the parent ConsumerGroup.

Parameters:
  • count (int) – limit number of messages returned

  • block (int) – milliseconds to block, 0 for indefinitely.

  • last_id (str) – optional last ID, by default uses the special token “>”, which reads the oldest unread message.

Returns:

a list of (message id, data) 2-tuples.

set_id(id='$')

Set the last-read message id for the stream within the context of the parent ConsumerGroup. By default this will be the special “$” identifier, meaning all messages are marked as having been read.

Parameters:

id – id of last-read message (or “$”).

class walrus.BitField(Container)

Wrapper that provides a convenient API for constructing and executing Redis BITFIELD commands. The BITFIELD command can pack multiple operations into a single logical command, so the BitField supports a method-chaining API that allows multiple operations to be performed atomically.

Rather than instantiating this class directly, you should use the Database.bit_field() method to obtain a BitField.
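
A short sketch of chaining operations and executing them atomically (the key name and bitfield layout are arbitrary; the return values are illustrative):

bf = db.bit_field('bf-demo')
ops = bf.set('u8', 0, 255).get('u8', 0).incrby('u8', 8, 10)
ops.execute()   # one return value per queued operation, e.g. [0, 255, 10]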

__delitem__(item)

Clear a range of bits in a bitfield. Note that the item must be a slice specifying the start and end of the range of bits to clear.

__getitem__(item)

Short-hand for getting a range of bits in a bitfield. Note that the item must be a slice specifying the start and end of the range of bits to read.

__setitem__(item, value)

Short-hand for setting a range of bits in a bitfield. Note that the item must be a slice specifying the start and end of the range of bits to set. If the value representation exceeds the number of bits implied by the slice range, a ValueError is raised.

bit_count(start=None, end=None)

Count the set bits in a string. Note that the start and end parameters are offsets in bytes.

get(fmt, offset)

Get the value of a given bitfield.

Parameters:
  • fmt – format-string for the bitfield being read, e.g. u8 for an unsigned 8-bit integer.

  • offset (int) – offset (in number of bits).

Returns:

a BitFieldOperation instance.

get_bit(offset)

Get the bit value at the given offset (in bits).

Parameters:

offset (int) – bit offset

Returns:

value at bit offset, 1 or 0

get_raw()

Return the raw bytestring that comprises the bitfield. Equivalent to a normal GET command.

incrby(fmt, offset, increment, overflow=None)

Increment a bitfield by a given amount.

Parameters:
  • fmt – format-string for the bitfield being updated, e.g. u8 for an unsigned 8-bit integer.

  • offset (int) – offset (in number of bits).

  • increment (int) – value to increment the bitfield by.

  • overflow (str) – overflow algorithm. Defaults to WRAP, but other acceptable values are SAT and FAIL. See the Redis docs for descriptions of these algorithms.

Returns:

a BitFieldOperation instance.

set(fmt, offset, value)

Set the value of a given bitfield.

Parameters:
  • fmt – format-string for the bitfield being set, e.g. u8 for an unsigned 8-bit integer.

  • offset (int) – offset (in number of bits).

  • value (int) – value to set at the given position.

Returns:

a BitFieldOperation instance.

set_bit(offset, value)

Set the bit value at the given offset (in bits).

Parameters:
  • offset (int) – bit offset

  • value (int) – new value for bit, 1 or 0

Returns:

previous value at bit offset, 1 or 0

set_raw(value)

Set the raw bytestring that comprises the bitfield. Equivalent to a normal SET command.

class walrus.containers.BitFieldOperation(database, key)

Command builder for BITFIELD commands.

__iter__()

Implicit execution and iteration of the return values for a sequence of operations.

execute()

Execute the operation(s) in a single BITFIELD command. The return value is a list of values corresponding to each operation.

get(fmt, offset)

Get the value of a given bitfield.

Parameters:
  • fmt – format-string for the bitfield being read, e.g. u8 for an unsigned 8-bit integer.

  • offset (int) – offset (in number of bits).

Returns:

a BitFieldOperation instance.

incrby(fmt, offset, increment, overflow=None)

Increment a bitfield by a given amount.

Parameters:
  • fmt – format-string for the bitfield being updated, e.g. u8 for an unsigned 8-bit integer.

  • offset (int) – offset (in number of bits).

  • increment (int) – value to increment the bitfield by.

  • overflow (str) – overflow algorithm. Defaults to WRAP, but other acceptable values are SAT and FAIL. See the Redis docs for descriptions of these algorithms.

Returns:

a BitFieldOperation instance.

set(fmt, offset, value)

Set the value of a given bitfield.

Parameters:
  • fmt – format-string for the bitfield being set, e.g. u8 for an unsigned 8-bit integer.

  • offset (int) – offset (in number of bits).

  • value (int) – value to set at the given position.

Returns:

a BitFieldOperation instance.

class walrus.BloomFilter(Container)

Bloom filters are probabilistic data structures used to answer the question: “is X a member of set S?” It is possible to receive a false positive, but impossible to receive a false negative (in other words, if the bloom filter contains a value, it will never erroneously report that it does not contain that value). The accuracy of the bloom filter can be improved, and the likelihood of a false positive reduced, by increasing the size of the filter. The default size is 64KB (524,288 bits).

Rather than instantiate this class directly, use Database.bloom_filter().

__contains__(data)

Check if an item has been added to the bloomfilter.

Parameters:

data (bytes) – a bytestring representing the item to check.

Returns:

a boolean indicating whether or not the item is present in the bloomfilter. False-positives are possible, but a negative return value is definitive.

add(data)

Add an item to the bloomfilter.

Parameters:

data (bytes) – a bytestring representing the item to add.

contains(data)

Check if an item has been added to the bloomfilter.

Parameters:

data (bytes) – a bytestring representing the item to check.

Returns:

a boolean indicating whether or not the item is present in the bloomfilter. False-positives are possible, but a negative return value is definitive.

High-level APIs

class walrus.Autocomplete(database, namespace='walrus', cache_timeout=600, stopwords_file='stopwords.txt', use_json=True)

Autocompletion for ascii-encoded string data. Titles are stored, along with any corollary data, in Redis. Substrings of the title are stored in sorted sets using a unique scoring algorithm. The scoring algorithm aims to return results in a sensible order, by looking at the entire title and the position of the matched substring within the title.

Additionally, the autocomplete object supports boosting search results by object ID or object type.

__init__(database, namespace='walrus', cache_timeout=600, stopwords_file='stopwords.txt', use_json=True)
Parameters:
  • database – A Database instance.

  • namespace – Namespace to prefix keys used to store metadata.

  • cache_timeout – Complex searches using boosts will be cached. Specify the amount of time these results are cached for.

  • stopwords_file – Filename containing newline-separated stopwords. Set to None to disable stopwords filtering.

  • use_json (bool) – Whether object data should be serialized as JSON.

boost_object(obj_id=None, obj_type=None, multiplier=1.1, relative=True)

Boost search results for the given object or type by the amount specified. When the multiplier is greater than 1, the results will percolate to the top. Values between 0 and 1 will percolate results to the bottom.

Either an obj_id or obj_type (or both) must be specified.

Parameters:
  • obj_id – An object’s unique identifier (optional).

  • obj_type – The object’s type (optional).

  • multiplier – A positive floating-point number.

  • relative – If True, then any pre-existing saved boost will be updated using the given multiplier.

Examples:

# Make all objects of type=photos percolate to top.
ac.boost_object(obj_type='photo', multiplier=2.0)

# Boost a particularly popular blog entry.
ac.boost_object(
    popular_entry.id,
    'entry',
    multiplier=5.0,
    relative=False)
exists(obj_id, obj_type=None)

Return whether the given object exists in the search index.

Parameters:
  • obj_id – The object’s unique identifier.

  • obj_type – The object’s type.

flush(batch_size=1000)

Delete all autocomplete indexes and metadata.

list_data()

Return all the data stored in the autocomplete index. If the data was stored as serialized JSON, then it will be de-serialized before being returned.

Return type:

list

list_titles()

Return the titles of all objects stored in the autocomplete index.

Return type:

list

remove(obj_id, obj_type=None)

Remove an object identified by the given obj_id (and optionally obj_type) from the search index.

Parameters:
  • obj_id – The object’s unique identifier.

  • obj_type – The object’s type.

search(phrase, limit=None, boosts=None, chunk_size=1000)

Perform a search for the given phrase. Objects whose title matches the search will be returned. The values returned will be whatever you specified as the data parameter when you called store().

Parameters:
  • phrase – One or more words or substrings.

  • limit (int) – Limit size of the result set.

  • boosts (dict) – A mapping of object id/object type to floating point multipliers.

Returns:

A list containing the object data for objects matching the search phrase.

store(obj_id, title=None, data=None, obj_type=None)

Store data in the autocomplete index.

Parameters:
  • obj_id – Either a unique identifier for the object being indexed or the word/phrase to be indexed.

  • title – The word or phrase to be indexed. If not provided, the obj_id will be used as the title.

  • data – Arbitrary data to index, which will be returned when searching for results. If not provided, this value will default to the title being indexed.

  • obj_type – Optional object type. Since results can be boosted by type, you might find it useful to specify this when storing multiple types of objects.

You have the option of storing several types of data as defined by the parameters. At the minimum, you can specify an obj_id, which will be the word or phrase you wish to index. Alternatively, if for instance you were indexing blog posts, you might specify all parameters.

class walrus.Cache(database, name='cache', default_timeout=None, debug=False)

Cache implementation with simple get/set operations, and a decorator.

__init__(database, name='cache', default_timeout=None, debug=False)
Parameters:
  • database – Database instance.

  • name – Namespace for this cache.

  • default_timeout (int) – Default cache timeout.

  • debug – Disable cache for debugging purposes. Cache will no-op.

cache_async(key_fn=<function Cache._key_fn>, timeout=3600)

Decorator that will execute the cached function in a separate thread. The function will immediately return, returning a callable to the user. This callable can be used to check for a return value.

For details, see the Cache Asynchronously section of the docs.

Parameters:
  • key_fn – Function used to generate cache key.

  • timeout (int) – Cache timeout in seconds.

Returns:

A new function which can be called to retrieve the return value of the decorated function.

cached(key_fn=<function Cache._key_fn>, timeout=None, metrics=False)

Decorator that will transparently cache calls to the wrapped function. By default, the cache key will be made up of the arguments passed in (like memoize), but you can override this by specifying a custom key_fn.

Parameters:
  • key_fn – Function used to generate a key from the given args and kwargs.

  • timeout – Time to cache return values.

  • metrics – Keep stats on cache utilization and timing.

Returns:

Return the result of the decorated function call with the given args and kwargs.

Usage:

cache = Cache(my_database)

@cache.cached(timeout=60)
def add_numbers(a, b):
    return a + b

print(add_numbers(3, 4))  # Function is called.
print(add_numbers(3, 4))  # Not called, value is cached.

add_numbers.bust(3, 4)  # Clear cache for (3, 4).
print(add_numbers(3, 4))  # Function is called.

The decorated function also gains a new attribute named bust which will clear the cache for the given args.

cached_property(key_fn=<function Cache._key_fn>, timeout=None)

Decorator that will transparently cache calls to the wrapped method. The method will be exposed as a property.

Usage:

cache = Cache(my_database)

class Clock(object):
    @cache.cached_property()
    def now(self):
        return datetime.datetime.now()

clock = Clock()
print(clock.now)
delete(key)

Remove the given key from the cache.

delete_many(keys)

Delete multiple keys from the cache in one operation.

Parameters:

keys (list) – keys to delete.

Returns:

number of keys removed.

flush()

Remove all cached objects from the database.

get(key, default=None)

Retrieve a value from the cache. In the event the value does not exist, return the default.

get_many(keys)

Retrieve multiple values from the cache. Missing keys are not included in the result dictionary.

Parameters:

keys (list) – list of keys to fetch.

Returns:

dictionary mapping keys to cached values.

keys()

Return all keys for cached values.

set(key, value, timeout=None)

Cache the given value in the specified key. If no timeout is specified, the default timeout will be used.

set_many(_Cache__data=None, timeout=None, **kwargs)

Set multiple key/value pairs in one operation.

Parameters:
  • __data (dict) – provide data as dictionary of key/value pairs.

  • timeout – optional timeout for data.

  • kwargs – alternatively, provide data as keyword arguments.

Returns:

True on success.
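
A short sketch of the basic get/set API (assuming db is a Database instance; keys and values are arbitrary):

cache = db.cache()
cache.set('greeting', 'hello', timeout=60)
cache.get('greeting')               # 'hello'
cache.set_many({'k1': 1, 'k2': 2}, timeout=60)
cache.get_many(['k1', 'k2', 'k3'])  # {'k1': 1, 'k2': 2}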

class walrus.Counter(database, name)

Simple counter.

__init__(database, name)
Parameters:
  • database – A walrus Database instance.

  • name (str) – The name for the counter.

class walrus.Index(db, name, **tokenizer_settings)

Full-text search index.

Store documents, along with arbitrary metadata, and perform full-text search on the document content. Supports porter-stemming, stopword filtering, basic result ranking, and (optionally) double-metaphone for phonetic search.

__init__(db, name, **tokenizer_settings)
Parameters:
  • db (Database) – a walrus database object.

  • name (str) – name for the search index.

  • stemmer (bool) – use porter stemmer (default True).

  • metaphone (bool) – use double metaphone (default False).

  • stopwords_file (str) – defaults to walrus stopwords.txt.

  • min_word_length (int) – specify minimum word length.

Create a search index for storing and searching documents.

add(key, content, _Index__metadata=None, **metadata)
Parameters:
  • key – Document unique identifier.

  • content (str) – Content to store and index for search.

  • metadata – Arbitrary key/value pairs to store for document.

Add a document to the search index.

get_document(document_id)
Parameters:

document_id – Document unique identifier.

Returns:

a dictionary containing the document content and any associated metadata.

remove(key, preserve_data=False)
Parameters:

key – Document unique identifier.

Remove the document from the search index.

replace(key, content, _Index__metadata=None, **metadata)
Parameters:
  • key – Document unique identifier.

  • content (str) – Content to store and index for search.

  • metadata – Arbitrary key/value pairs to store for document.

Update the given document. Existing metadata will be removed and replaced with the provided metadata.

search(query)
Parameters:

query (str) – Search query. May contain boolean/set operations and parentheses.

Returns:

a list of document hashes corresponding to matching documents.

Search the index. The return value is a list of dictionaries corresponding to the documents that matched. These dictionaries contain a content key with the original indexed content, along with any additional metadata that was specified.

search_items(query)
Parameters:

query (str) – Search query. May contain boolean/set operations and parentheses.

Returns:

a list of (key, document hashes) tuples corresponding to matching documents.

Search the index. The return value is a list of (key, document dict) corresponding to the documents that matched. These dictionaries contain a content key with the original indexed content, along with any additional metadata that was specified.
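
A brief sketch of indexing and searching documents (the index name, document keys and content are arbitrary):

idx = db.Index('docs-demo')
idx.add('doc-1', 'the quick brown fox', source='demo')
idx.add('doc-2', 'lazy dogs sleep all day', source='demo')
idx.search('quick OR lazy')   # matches both documents
idx.search_items('fox')       # [('doc-1', {'content': ..., 'source': ...})]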

update(key, content, _Index__metadata=None, **metadata)
Parameters:
  • key – Document unique identifier.

  • content (str) – Content to store and index for search.

  • metadata – Arbitrary key/value pairs to store for document.

Update the given document. Existing metadata will be preserved and, optionally, updated with the provided metadata.

class walrus.Graph(walrus, namespace)

Simple hexastore built using Redis ZSets. The basic idea is that we have a collection of relationships of the form subject-predicate-object. For example:

  • charlie – friends – huey

  • charlie – lives – Kansas

  • huey – lives – Kansas

We might wish to ask questions of our data-store like “which of charlie’s friends live in Kansas?” To do this we will store every permutation of the S-P-O triples, then we can do efficient queries using the parts of the relationship we know:

  • query the “object” portion of the “charlie – friends” subject and predicate.

  • for each object returned, turn it into the subject of a second query whose predicate is “lives” and whose object is “Kansas”

So we would return the subjects that satisfy the following expression:

("charlie -- friends") -- lives -- Kansas.

To accomplish this in Python we could write:

db = Database()
graph = db.graph('people')

# Store my friends.
graph.store_many(
    ('charlie', 'friends', 'huey'),
    ('charlie', 'friends', 'zaizee'),
    ('charlie', 'friends', 'nuggie'))

# Store where people live.
graph.store_many(
    ('huey', 'lives', 'Kansas'),
    ('zaizee', 'lives', 'Missouri'),
    ('nuggie', 'lives', 'Kansas'),
    ('mickey', 'lives', 'Kansas'))

# Perform our search. We will use a variable (X) to indicate the
# value we're interested in.
X = graph.v.X  # Create a variable placeholder.

# In the first clause we indicate we are searching for my friends.
# In the second clause, we only want those friends who also live in
# Kansas.
results = graph.search(
    {'s': 'charlie', 'p': 'friends', 'o': X},
    {'s': X, 'p': 'lives', 'o': 'Kansas'})
print(results)

# Prints: {'X': {'huey', 'nuggie'}}

See: http://redis.io/topics/indexes#representing-and-querying-graphs-using-an-hexastore

__init__(walrus, namespace)
delete(s, p, o)

Remove the given subj-pred-obj triple from the database.

query(s=None, p=None, o=None)

Return all triples that satisfy the given expression. You may specify all or none of the fields (s, p, and o). For instance, if I wanted to query for all the people who live in Kansas, I might write:

for triple in graph.query(p='lives', o='Kansas'):
    print(triple['s'], 'lives in Kansas!')
search(*conditions)

Given a set of conditions, return all values that satisfy the conditions for a given set of variables.

For example, suppose I wanted to find all of my friends who live in Kansas:

X = graph.v.X
results = graph.search(
    {'s': 'charlie', 'p': 'friends', 'o': X},
    {'s': X, 'p': 'lives', 'o': 'Kansas'})

The return value consists of a dictionary keyed by variable, whose values are set objects containing the values that satisfy the query clauses, e.g.:

print(results)

# Result has one key, for our "X" variable. The value is the set
# of my friends that live in Kansas.
# {'X': {'huey', 'nuggie'}}

# We can assume the following triples exist:
# ('charlie', 'friends', 'huey')
# ('charlie', 'friends', 'nuggie')
# ('huey', 'lives', 'Kansas')
# ('nuggie', 'lives', 'Kansas')
store(s, p, o)

Store a subject-predicate-object triple in the database.

store_many(items)

Store multiple subject-predicate-object triples in the database.

Parameters:

items – A list of (subj, pred, obj) 3-tuples.

v(name)

Create a named variable, used to construct multi-clause queries with the Graph.search() method.

class walrus.Lock(database, name, ttl=None, lock_id=None)

Lock implementation. Can also be used as a context-manager or decorator.

Unlike the redis-py lock implementation, this Lock does not use a spin-loop when blocking to acquire the lock. Instead, it performs a blocking pop on a list. When a lock is released, a value is pushed into this list, signalling that the lock is available.

Warning

The event list for each lock persists indefinitely unless removed using Lock.clear() or otherwise manually in the Redis database. For this reason, be cautious when creating locks dynamically, or your keyspace might grow in an unbounded way.

The lock uses Lua scripts to ensure the atomicity of its operations.

You can set a TTL on a lock to reduce the potential for deadlocks in the event of a crash. If a lock is not released before it exceeds its TTL, it will expire, and threads that are blocked waiting for the lock can then acquire it.

Note

TTL is specified in milliseconds.

Locks can be used as context managers or as decorators:

lock = db.lock('my-lock')

with lock:
    perform_some_calculations()

@lock
def another_function():
    # The lock will be acquired when this function is
    # called, and released when the function returns.
    do_some_more_calculations()
__init__(database, name, ttl=None, lock_id=None)
Parameters:
  • database – A walrus Database instance.

  • name (str) – The name for the lock.

  • ttl (int) – The time-to-live for the lock in milliseconds.

  • lock_id (str) – Unique identifier for the lock instance.

acquire(block=True)

Acquire the lock. The lock will be held until it is released by calling Lock.release(). If the lock was initialized with a ttl, then the lock will be released automatically after the given number of milliseconds.

By default this method will block until the lock becomes free (either by being released or expiring). The blocking is accomplished by performing a blocking left-pop on a list, as opposed to a spin-loop.

If you specify block=False, then the method will return False if the lock could not be acquired.

Parameters:

block (bool) – Whether to block while waiting to acquire the lock.

Returns:

Returns True if the lock was acquired.

clear()

Clear the lock, allowing it to be acquired. Do not use this method except to recover from a deadlock. Otherwise you should use Lock.release().

release()

Release the lock.

Returns:

Returns True if the lock was released.

class walrus.Model(*args, **kwargs)

A collection of fields to be stored in the database. Walrus stores model instance data in hashes keyed by a combination of model name and primary key value. Instance attributes are automatically converted to values suitable for storage in Redis (i.e., datetime becomes timestamp), and vice-versa.

Additionally, model fields can be indexed, which allows filtering. There are three types of indexes:

  • Absolute

  • Scalar

  • Full-text search

Absolute indexes are used for values like strings or UUIDs and support only equality and inequality checks.

Scalar indexes are for numeric values as well as datetimes, and support equality, inequality, and greater or less-than.

The final type of index, FullText, can only be used with the TextField. FullText indexes allow search using the match() method. For more info, see Full-text search.

__database__ = None

Required: the Database instance to use to persist model data.

__init__(*args, **kwargs)
__namespace__ = None

Optional: namespace to use for model data.

classmethod all()

Return an iterator that successively yields saved model instances. Models are saved in an unordered Set, so the iterator will return them in arbitrary order.

Example:

for note in Note.all():
    print(note.content)

To return models in sorted order, see Model.query(). Example returning all records, sorted newest to oldest:

for note in Note.query(order_by=Note.timestamp.desc()):
    print(note.timestamp, note.content)
classmethod count()

Return the number of objects in the given collection.

classmethod create(**kwargs)

Create a new model instance and save it to the database. Values are passed in as keyword arguments.

Example:

user = User.create(first_name='Charlie', last_name='Leifer')
delete(for_update=False)

Delete the given model instance.

classmethod get(expression)

Retrieve the model instance matching the given expression. If the number of matching results is not equal to one, then a ValueError will be raised.

Parameters:

expression – A boolean expression to filter by.

Returns:

The matching Model instance.

Raises:

ValueError if result set size is not 1.

incr(field, incr_by=1)

Increment the value stored in the given field by the specified amount. Any indexes will be updated at the time incr() is called.

Parameters:
  • field (Field) – A field instance.

  • incr_by – An int or float.

Example:

# Retrieve a page counter object for the given URL.
page_count = PageCounter.get(PageCounter.url == url)

# Update the hit count, persisting to the database and
# updating secondary indexes in one go.
page_count.incr(PageCounter.hits)
index_separator = '.'

Required: character to use as a delimiter for indexes, default “.”

classmethod load(primary_key, convert_key=True)

Retrieve a model instance by primary key.

Parameters:

primary_key – The primary key of the model instance.

Returns:

Corresponding Model instance.

Raises:

KeyError if object with given primary key does not exist.

classmethod query(expression=None, order_by=None)

Return model instances matching the given expression (if specified). Additionally, matching instances can be returned sorted by field value.

Example:

# Get administrators sorted by username.
admin_users = User.query(
    (User.admin == True),
    order_by=User.username)

# List blog entries newest to oldest.
entries = Entry.query(order_by=Entry.timestamp.desc())

# Perform a complex filter.
values = StatData.query(
    (StatData.timestamp < datetime.date.today()) &
    ((StatData.type == 'pv') | (StatData.type == 'cv')))
Parameters:
  • expression – A boolean expression to filter by.

  • order_by – A field whose value should be used to sort returned instances.

classmethod query_delete(expression=None)

Delete model instances matching the given expression (if specified). If no expression is provided, then all model instances will be deleted.

Parameters:

expression – A boolean expression to filter by.

save(_is_create=False)

Save the given model instance. If the model does not have a primary key value, Walrus will call the primary key field’s generate_key() method to attempt to generate a suitable value.

to_hash()

Return a Hash instance corresponding to the raw model data.

class walrus.RateLimit(database, name, limit=5, per=60, debug=False)

Rate limit implementation. Allows up to limit events every per seconds.

__init__(database, name, limit=5, per=60, debug=False)
Parameters:
  • database – Database instance.

  • name – Namespace for this cache.

  • limit (int) – Number of events allowed during a given time period.

  • per (int) – Time period the limit applies to, in seconds.

  • debug – Disable rate-limit for debugging purposes. All events will appear to be allowed and valid.

limit(key)

Function to log an event with the given key. If the key has not exceeded its allotted events, then the function returns False to indicate that no limit is being imposed.

If the key has exceeded the number of events, then the function returns True indicating rate-limiting should occur.

Parameters:

key (str) – A key identifying the source of the event.

Returns:

Boolean indicating whether the event should be rate-limited or not.

rate_limited(key_function=None)

Function or method decorator that will prevent calls to the decorated function when the number of events has been exceeded for the given time period.

It is important to choose an appropriate key function. For instance, if rate-limiting a web page, you might use the requesting user’s IP address as the key.

If the number of allowed events has been exceeded, a RateLimitException will be raised.

Parameters:

key_function – Function that accepts the params of the decorated function and returns a string key. If not provided, a hash of the args and kwargs will be used.

Returns:

If the call is not rate-limited, then the return value will be that of the decorated function.

Raises:

RateLimitException.
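
An illustrative sketch of the decorator (the names and limits are arbitrary; the key function derives the rate-limit key from the decorated function’s arguments):

rl = db.rate_limit('login-attempts', limit=3, per=60)

@rl.rate_limited(key_function=lambda username: username)
def login(username):
    ...

# A fourth call within 60 seconds for the same username raises
# RateLimitException.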

class walrus.TimeSeries(ConsumerGroup)

TimeSeries is a consumer-group that provides a higher level of abstraction, reading and writing message ids as datetimes, and returning messages using a convenient, lightweight Message class.

Rather than creating this class directly, use the Database.time_series() method.

Each registered stream within the group is exposed as a special attribute that provides stream-specific APIs within the context of the group. For more information see TimeSeriesStream.

Example:

ts = db.time_series('groupname', ['stream-1', 'stream-2'])
ts.stream_1  # TimeSeriesStream for "stream-1"
ts.stream_2  # TimeSeriesStream for "stream-2"
Parameters:
  • database (Database) – Redis client

  • group – name of consumer group

  • keys – stream identifier(s) to monitor. May be a single stream key, a list of stream keys, or a key-to-minimum id mapping. The minimum id for each stream should be considered an exclusive lower-bound. The ‘$’ value can also be used to only read values added after our command started blocking.

  • consumer – name for consumer within group

Returns:

a TimeSeries instance

consumer(name)

Create a new consumer for the ConsumerGroup.

Parameters:

name – name of consumer

Returns:

a ConsumerGroup using the given consumer name.

create(ensure_keys_exist=True, mkstream=False)

Create the consumer group and register it with the group’s stream keys.

Parameters:
  • ensure_keys_exist – Ensure that the streams exist before creating the consumer group. Streams that do not exist will be created.

  • mkstream – Use the “MKSTREAM” option to ensure stream exists (may require unstable version of Redis).

destroy()

Destroy the consumer group.

read(count=None, block=None)

Read unseen messages from all streams in the consumer group. Wrapper for Database.xreadgroup method.

Parameters:
  • count (int) – limit number of messages returned

  • block (int) – milliseconds to block, 0 for indefinitely.

Returns:

a list of Message objects

reset()

Reset the consumer group, clearing the last-read status for each stream so it will read from the beginning of each stream.

set_id(id='$')

Set the last-read message id for each stream in the consumer group. By default, this will be the special “$” identifier, meaning all messages are marked as having been read.

Parameters:

id – id of last-read message (or “$”).

Field types

class walrus.Field(index=False, primary_key=False, default=None)

Named attribute on a model that will hold a value of the given type. Fields are declared as attributes on a model class.

Example:

walrus_db = Database()

class User(Model):
    __database__ = walrus_db
    __namespace__ = 'my-app'

    # Use the user's email address as the primary key.
    # All primary key fields will also get a secondary
    # index, so there's no need to specify index=True.
    email = TextField(primary_key=True)

    # Store the user's interests in a free-form text
    # field. Also create a secondary full-text search
    # index on this field.
    interests = TextField(
        fts=True,
        stemmer=True,
        min_word_length=3)

class Note(Model):
    __database__ = walrus_db
    __namespace__ = 'my-app'

    # A note is associated with a user. We will create a
    # secondary index on this field so we can efficiently
    # retrieve all notes created by a specific user.
    user_email = TextField(index=True)

    # Store the note content in a searchable text field. Use
    # the double-metaphone algorithm to index the content.
    content = TextField(
        fts=True,
        stemmer=True,
        metaphone=True)

    # Store the timestamp the note was created automatically.
    # Note that we do not call `now()`, but rather pass the
    # function itself.
    timestamp = DateTimeField(default=datetime.datetime.now)
__init__(index=False, primary_key=False, default=None)
Parameters:
  • index (bool) – Use this field as an index. Indexed fields will support Model.get() lookups.

  • primary_key (bool) – Use this field as the primary key.

get_indexes()

Return a list of secondary indexes to create for the field. For instance, a TextField might have a full-text search index, whereas an IntegerField would have a scalar index that supported range queries.

class walrus.TextField(fts=False, stemmer=True, metaphone=False, stopwords_file=None, min_word_length=None, *args, **kwargs)

Store unicode strings, encoded as UTF-8. TextField also supports full-text search through the optional fts parameter.

Note

If full-text search is enabled for the field, then the index argument is implied.

Parameters:
  • fts (bool) – Enable simple full-text search.

  • stemmer (bool) – Use porter stemmer to process words.

  • metaphone (bool) – Use the double metaphone algorithm to process words.

  • stopwords_file (str) – File containing stopwords, one per line. If not specified, the default stopwords will be used.

  • min_word_length (int) – Minimum length (inclusive) of word to be included in search index.

search(query[, default_conjunction='and'])
Parameters:
  • query (str) – Search query.

  • default_conjunction (str) – Either 'and' or 'or'.

Create an expression corresponding to the given search query. Search queries can contain conjunctions (AND and OR).

Example:

class Message(Model):
    __database__ = my_db
    content = TextField(fts=True)

expression = Message.content.search('python AND (redis OR walrus)')
messages = Message.query(expression)
for message in messages:
    print(message.content)
get_indexes()

Return a list of secondary indexes to create for the field. For instance, a TextField might have a full-text search index, whereas an IntegerField would have a scalar index that supported range queries.

class walrus.IntegerField(index=False, primary_key=False, default=None)

Store integer values.

class walrus.AutoIncrementField(IntegerField)

Auto-incrementing primary key field.

class walrus.FloatField(index=False, primary_key=False, default=None)

Store floating point values.

class walrus.ByteField(index=False, primary_key=False, default=None)

Store arbitrary bytes.

class walrus.BooleanField(index=False, primary_key=False, default=None)

Store boolean values.

class walrus.UUIDField(**kwargs)

Store unique IDs. Can be used as primary key.

class walrus.DateTimeField(index=False, primary_key=False, default=None)

Store Python datetime objects.

class walrus.DateField(index=False, primary_key=False, default=None)

Store Python date objects.

class walrus.JSONField(index=False, primary_key=False, default=None)

Store arbitrary JSON data.

Container Field Types

class walrus.HashField(*args, **kwargs)

Store values in a Redis hash.

container_class

alias of Hash

class walrus.ListField(*args, **kwargs)

Store values in a Redis list.

container_class

alias of List

class walrus.SetField(*args, **kwargs)

Store values in a Redis set.

container_class

alias of Set

class walrus.ZSetField(*args, **kwargs)

Store values in a Redis sorted set.

container_class

alias of ZSet