Runtime Module

VIFF runtime. This is where the virtual ideal functionality is hiding! The runtime is responsible for sharing inputs, handling communication, and running the calculations.

Each player participating in the protocol will instantiate a Runtime object and use it for the calculations.

The Runtime returns Share objects for most operations, and these can be added, subtracted, and multiplied as normal thanks to overloaded arithmetic operators. The runtime will take care of scheduling things correctly behind the scenes.

class viff.runtime.Share(runtime, field, value=None)

A shared number.

The Runtime operates on shares, represented by this class. Shares are asynchronous in the sense that they promise to attain a value at some point in the future.

Shares overload the arithmetic operations so that x = a + b will create a new share x, which will eventually contain the sum of a and b. Each share is associated with a Runtime and the arithmetic operations simply call back to that runtime.
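This behaviour can be modelled with a small, self-contained sketch (plain Python, no Twisted; MiniShare and its methods are illustrative names, not part of VIFF):

```python
class MiniShare:
    # Minimal model of a deferred share: the value may arrive later, and
    # pending callbacks fire as soon as it does. None means "not yet
    # ready" in this sketch.
    def __init__(self, value=None):
        self.value = value
        self.callbacks = []

    def callback(self, value):
        self.value = value
        for cb in self.callbacks:
            cb(value)

    def add_callback(self, cb):
        if self.value is not None:
            cb(self.value)
        else:
            self.callbacks.append(cb)

    def __add__(self, other):
        # x = a + b: x is returned immediately, but only attains the sum
        # once both inputs have a value.
        result = MiniShare()
        pending = {}

        def collect(key):
            def cb(value):
                pending[key] = value
                if len(pending) == 2:
                    result.callback(pending["a"] + pending["b"])
            return cb

        self.add_callback(collect("a"))
        other.add_callback(collect("b"))
        return result
```

In VIFF the arithmetic operators call back to the associated Runtime instead, which also schedules the necessary network communication.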

Inheritance diagram of Share

__init__(runtime, field, value=None)

Initialize a share.

If an initial value is given, it will be passed to callback() right away.

clone()

Clone a share.

Works like util.clone_deferred() except that it returns a new Share instead of a Deferred.

__add__(other)
__sub__(other)
__mul__(other)
__xor__(other)
__lt__(other)
__eq__(other)

Overloaded operators. They all call back to the Runtime used when the Share was constructed. The reverse-argument versions are defined too.

class viff.runtime.ShareList(shares, threshold=None)

Create a share that waits on a number of other shares.

Roughly modelled after the Twisted DeferredList class. The advantage of this class is that it is a Share (not just a Deferred) and that it can be made to trigger when a certain threshold of the shares are ready. This example shows how the pprint() callback is triggered when a and c are ready:

>>> from pprint import pprint
>>> from viff.field import GF256
>>> a = Share(None, GF256)
>>> b = Share(None, GF256)
>>> c = Share(None, GF256)
>>> shares = ShareList([a, b, c], threshold=2)
>>> shares.addCallback(pprint)           
<ShareList at 0x...>
>>> a.callback(10)
>>> c.callback(20)
[(True, 10), None, (True, 20)]

The pprint() function is called with a list of pairs. The first component of each pair is a boolean indicating if the callback or errback method was called on the corresponding Share, and the second component is the value given to the callback/errback.

If a threshold less than the full number of shares is used, some of the pairs may be missing and None is used instead. In the example above the b share arrived later than a and c, and so the list contains a None in its place.
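The threshold behaviour can be modelled with a rough, self-contained sketch (the MiniShareList name and interface are illustrative, not VIFF's implementation):

```python
class MiniShareList:
    # Collect (True, value) pairs from n watched shares; fire once
    # `threshold` of them are ready, leaving None for the rest.
    def __init__(self, n, threshold):
        self.results = [None] * n
        self.threshold = threshold
        self.fired = 0
        self.done = None  # set to the final list when the threshold is hit

    def callback(self, index, value):
        self.results[index] = (True, value)
        self.fired += 1
        if self.fired == self.threshold:
            self.done = list(self.results)
```

With n = 3 and threshold = 2, firing shares 0 and 2 with the values 10 and 20 yields [(True, 10), None, (True, 20)], matching the doctest above.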

Inheritance diagram of ShareList

class viff.runtime.ShareExchanger

Send and receive shares.

All players are connected by pair-wise connections and this Twisted protocol is one such connection. It is used to send and receive shares from one other player.

Inheritance diagram of ShareExchanger

incoming_data

Data from our peer is put here: an empty Deferred if we are waiting for input from the peer, or the data itself if it arrives before we are ready to use it.

loseConnection()

Disconnect this protocol instance.

sendData(program_counter, data_type, data)

Send data to the peer.

The program_counter is a tuple of unsigned integers, the data_type is an unsigned byte and data is a string.

The data is encoded as follows:

+---------+-----------+-----------+--------+--------------+
| pc_size | data_size | data_type |   pc   |     data     |
+---------+-----------+-----------+--------+--------------+
  2 bytes   2 bytes      1 byte     varies      varies

The program counter takes up 4 * pc_size bytes, and the data takes up data_size bytes.
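The encoding can be reproduced with Python's struct module. This is an illustrative sketch assuming network byte order and 4-byte unsigned integers for the program counter entries; pack_message is a hypothetical name, not part of the VIFF API:

```python
import struct

def pack_message(program_counter, data_type, data):
    # Header: pc_size and data_size as 2-byte unsigned ints, data_type as
    # one unsigned byte, then the program counter as 4-byte unsigned ints,
    # followed by the raw data bytes.
    pc_size = len(program_counter)
    header = struct.pack("!HHB", pc_size, len(data), data_type)
    pc_bytes = struct.pack("!%dI" % pc_size, *program_counter)
    return header + pc_bytes + data
```

A message with a three-element program counter and five bytes of data thus occupies 2 + 2 + 1 + 12 + 5 = 22 bytes on the wire.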

sendShare(program_counter, share)

Send a share.

The program counter and the share are converted to bytes and sent to the peer.

viff.runtime.preprocess(generator)

Track calls to this method.

The decorated method will be replaced with a proxy method which first tries to get the data needed from Runtime._pool, and if that fails it falls back to the original method. It also returns a flag to indicate whether the data is from the pool.

The generator argument is only used to record where the data should be generated from; the method is not actually called. It must be the name of the method (a string), not the method itself.

See also Preprocessing for more background information.
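A rough, self-contained sketch of the proxy mechanism (keying the pool by program counter and the (data, from_pool) return convention are assumptions for illustration, not VIFF's exact code):

```python
def preprocess(generator):
    # `generator` names the method that can produce the data ahead of
    # time; it is only recorded here, never called.
    def decorator(method):
        def proxy(self, *args, **kwargs):
            key = tuple(self.program_counter)
            try:
                # Data was generated in a preprocessing phase: serve it
                # from the pool and flag it as such.
                return self._pool.pop(key), True
            except KeyError:
                # Nothing pooled for this key: fall back to the original.
                return method(self, *args, **kwargs), False
        proxy.generator = generator
        return proxy
    return decorator
```

The proxy first consults the pool and only computes fresh data on a miss, which is exactly the fallback behaviour described above.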

viff.runtime.create_runtime(id, players, threshold, options=None, runtime_class=None)

Create a Runtime and connect to the other players.

This function should be used in normal programs instead of instantiating the Runtime directly. This function makes sure that the Runtime is correctly connected to the other players.

The return value is a Deferred which will trigger when the runtime is ready. Add your protocol as a callback on this Deferred using code like this:

def protocol(runtime):
    a, b, c = runtime.shamir_share([1, 2, 3], Zp, input)

    a = runtime.open(a)
    b = runtime.open(b)
    c = runtime.open(c)

    dprint("Opened a: %s", a)
    dprint("Opened b: %s", b)
    dprint("Opened c: %s", c)

    runtime.wait_for(a, b, c)

pre_runtime = create_runtime(id, players, 1)
pre_runtime.addCallback(protocol)

This is the general template which VIFF programs should follow. Please see the example applications for more examples.

class viff.runtime.Runtime(player, threshold, options=None)

Basic VIFF runtime with no crypto.

This runtime contains only the most basic operations needed such as the program counter, the list of other players, etc.

id

Player ID. This is an integer in the range 1–n for n players.

threshold

Default threshold used by shamir_share(), open(), and others.

program_counter

Whenever a share is sent over the network, it must be uniquely identified so that the receiving player knows what operation the share is a result of. This is done by associating a program counter with each operation.

Keeping the program counter synchronized between all players ought to be easy, but because of the asynchronous nature of network protocols, all players might not reach the same parts of the program at the same time.

Consider two players A and B who are both waiting on the variables a and b. Callbacks have been added to a and b, and the question is what program counter the callbacks should use when sending data out over the network.

Let A receive input for a and then for b a little later, and let B receive the inputs in the reverse order, so that the input for b arrives first. The goal is to keep the program counters synchronized so that program counter x refers to the same operation on all players. Because the inputs arrive in a different order at different players, incrementing a simple global counter is not enough.

Instead, a tree is made, which follows the tree of execution. At the top level the program counter starts at [0]. At the next operation it becomes [1], and so on. If a callback is scheduled (see schedule_callback()) at program counter [x, y, z], any calls it makes will be numbered [x, y, z, 1], then [x, y, z, 2], and so on.

Maintaining such a tree of program counters ensures that different parts of the program execution never reuse the same program counter for different variables.

The schedule_callback() method is responsible for scheduling callbacks with the correct program counter.

See Program Counters for more background information.
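A minimal model of this tree-shaped counter (illustrative names, mirroring increment_pc(), fork_pc(), and unfork_pc()):

```python
class ProgramCounter:
    # The counter is a list of ints; the length of the list mirrors the
    # nesting depth of scheduled callbacks.
    def __init__(self):
        self.pc = [0]

    def increment(self):
        # Next operation at the current level: [x, y, z] -> [x, y, z+1].
        self.pc[-1] += 1

    def fork(self):
        # Enter a scheduled callback: [x, y, z] -> [x, y, z, 0].
        self.pc.append(0)

    def unfork(self):
        # Leave the callback again: [x, y, z, 0] -> [x, y, z].
        self.pc.pop()
```

Operations performed inside a callback scheduled at [x, y, z] are numbered [x, y, z, 1], [x, y, z, 2], and so on, so no two operations ever share a counter.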

abort(protocol, exc)

Abort the execution due to an exception.

The protocol received bad data which resulted in exc being raised when unpacking.

activate_reactor()

Activate the reactor to do actual communication.

This is where the recursion happens.

activation_counter = None

Counter for calls of activate_reactor().

add(share_a, share_b)

Secure addition.

At least one of the arguments must be a Share; the other can be a FieldElement or a (possibly long) Python integer.

deferred_queue = None

Queue of deferreds and data.

depth_counter = None

Record the recursion depth.

depth_limit = None

Recursion depth limit, determined by experiment, including a security margin.

fork_pc()

Fork the program counter.

handle_deferred_data(deferred, data)

Put deferred and data into the queue if the ViffReactor is running. Otherwise, just execute the callback.

id = None

ID of this player.

increment_pc()

Increment the program counter.

input(inputters, field, number=None)

Input number to the computation.

The players listed in inputters must provide an input number; everybody will receive a list of Share objects, one from each inputter. If only a single player is listed in inputters, a Share is given back directly.

mul(share_a, share_b)

Secure multiplication.

At least one of the arguments must be a Share; the other can be a FieldElement or a (possibly long) Python integer.

num_players = None

Number of known players.

Equal to len(self.players), but storing it here is more direct.

output(share, receivers=None)

Open share to receivers (defaults to all players).

Returns a Share to players with IDs in receivers and None to the remaining players.

players = None

Information on players.

Mapping from Player ID to Player objects.

preprocess(program)

Generate preprocessing material.

The program specifies which methods to call and with which arguments. The generator methods called must adhere to the following interface:

  • They must return a list of Deferred instances.
  • Every Deferred must yield an item of pre-processed data. This can be a value, a list or tuple of values, or a Deferred (which will be converted to a value by Twisted), but NOT a list of Deferreds. Use gatherResults() to avoid the latter.

The generate_triples() method is an example of a method fulfilling this interface.

print_transferred_data()

Print the amount of transferred data for all connections.

process_deferred_queue()

Execute the callbacks of the deferreds in the queue.

If this function is not called via activate_reactor(), complex callbacks are executed as well.

process_queue(queue)

Execute the callbacks of the deferreds in queue.

protocols = None

Connections to the other players.

Mapping from Player ID to ShareExchanger objects.

schedule_callback(deferred, func, *args, **kwargs)

Schedule a callback on a deferred with the correct program counter.

If a callback depends on the current program counter, then use this method to schedule it instead of simply calling addCallback directly. Simple callbacks that are independent of the program counter can still be added directly to the Deferred as usual.

Any extra arguments are passed to the callback as with addCallback().

schedule_complex_callback(deferred, func, *args, **kwargs)

Schedule a complex callback, i.e. a callback which blocks for a long time.

Note that the deferred is forked; if the callback returns something to be used afterwards, add further callbacks to the returned deferred.

shutdown()

Shutdown the runtime.

All connections are closed and the runtime cannot be used again after this has been called.

synchronize()

Introduce a synchronization point.

Returns a Deferred which will trigger if and when all other players have made their calls to synchronize(). By adding callbacks to the returned Deferred, one can divide a protocol execution into disjoint phases.

threshold = None

Shamir secret sharing threshold.

unfork_pc()

Leave a fork of the program counter.

using_viff_reactor = None

Use deferred queues only if the ViffReactor is running.

wait_for(*vars)

Make the runtime wait for the variables given.

The runtime is shut down when all the variables have been calculated.
