Breaking Encryption Keys
Parallel geometric attacks on neural key exchange protocol
1 Introduction
This exercise is about programming an analysis of a secret key exchange protocol based on
artificial neural networks. No preliminary knowledge of key exchange protocols or
neural networks is required; all necessary concepts are explained in this section.
Secret key exchange over a public channel is an important problem in
cryptography. In 1976 Diffie and Hellman suggested their famous protocol for
key exchange based on number theory. More recently (around 2002) I. Kanter, W.
Kinzel and E. Kanter proposed to use the synchronization of neural networks by mutual
learning as a mechanism to achieve secure key exchange. Unlike in the Diffie-Hellman
protocol, the common secret key is generated here by multiple rounds of message
exchange, during which every participant obtains an approximation of the key.
It turned out, however, that the Kanter-Kinzel-Kanter (KKK) protocol is not
secure, and three different attacks on this protocol have been demonstrated.
Later it was shown that the resistance of the protocol against different
attacks can be improved by tuning the parameters of the neural networks. Neural
cryptography is still a very active area, whose further development may produce
fast cryptographic protocols that use simple arithmetic operations, are parallelizable,
and can be implemented in hardware.
2 The KKK protocol
2.1 KKK protocol
Each of the two parties A and B in the KKK protocol uses a two-layer neural network.
The first layer consists of K perceptrons and the second layer computes the parity
of the hidden outputs of these K perceptrons. Each perceptron has n inputs, so
the whole network accepts N = Kn input values, which are assumed to be
either +1 or -1. The output node of the kth perceptron (1 ≤ k ≤ K) is connected to
its ith input with weight $w_{k,i}$. All weights are integers from the set {-L,…,L} for
some L. Given n inputs $x_{k,1},\dots,x_{k,n}$, the kth perceptron generates at its output the
sign (+1 or -1) of the weighted sum of its inputs:

$$o_k = \operatorname{sgn}\Big(\sum_{i=1}^{n} w_{k,i}\, x_{k,i}\Big).$$

The output O of the whole network is then generated as the parity of the outputs of the K perceptrons:

$$O = \prod_{k=1}^{K} o_k.$$
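As a minimal illustration (not part of the protocol specification itself), the evaluation of such a parity machine can be sketched in C as follows; the names K, n, w and x mirror the notation above, and the tie sum = 0 is mapped to -1, matching the solution code later in this document:

int parityMachine(int K, int n, const int *w, const int *x)
{
    int O = 1;
    for (int k = 0; k < K; k++)
    {
        int sum = 0;
        for (int i = 0; i < n; i++)        // weighted sum of the kth perceptron's inputs
            sum += w[k*n + i] * x[k*n + i];
        int ok = (sum <= 0) ? -1 : 1;      // sign of the local field
        O *= ok;                           // parity = product of the hidden outputs
    }
    return O;
}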
In the KKK protocol with fixed K, N and L, both parties A and B start with
neural networks of the same publicly known structure, but with random
uncorrelated weights, which are assumed to be secret (i.e. known only to A and B,
respectively). At each round a new set of N random input values is chosen publicly,
and each party publicly announces the result of evaluating its own network on that
set of input values. If the announced results are different (OA ≠ OB), then both parties
leave their networks unchanged and proceed to the next round. If the results coincide (OA =
OB = O), then both parties update the weights in those of their perceptrons which produced
the same result ($o_k = O$) at their (hidden) outputs. The anti-Hebbian learning rule is used
to update the weights: $w_{k,i} := w_{k,i} - o_k x_{k,i}$. If after the update a weight falls
outside {-L,…,L}, it is adjusted to the nearest bound (L or -L).
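A minimal sketch of this update rule in C, for a single perceptron k whose hidden output ok equals the common output O (names again mirror the notation above):

void antiHebbianUpdate(int n, int L, int *w, const int *x, int ok)
{
    for (int i = 0; i < n; i++)
    {
        w[i] -= ok * x[i];          // w_{k,i} := w_{k,i} - o_k * x_{k,i}
        if (w[i] >  L) w[i] =  L;   // clip to the nearest bound
        if (w[i] < -L) w[i] = -L;
    }
}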
Figure 1: Parity machine
The remarkable property of the described protocol is that, starting with random
weights, both parties eventually (with probability 1) arrive at the same weights in
their neural networks (the networks are synchronized), which can then be used as the
common secret key. The detection of synchronization can be implemented by
testing that both networks have produced the same outputs for S consecutive rounds,
where S is some value fixed in advance.
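A sketch of such a detection test in C (the counter persists between rounds; the names are illustrative):

int checkSync(int OA, int OB, int S, int *count)
{
    if (OA == OB)
        (*count)++;      // one more consecutive round of agreement
    else
        *count = 0;      // reset on any disagreement
    return *count >= S;  // synchronization is declared after S rounds
}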
The secrecy of the key relies on the (assumed) difficulty for an attacker of finding out
that secret key, even assuming that the attacker knows all random inputs and
outputs generated by both parties during the execution of the protocol. Notice that the
attacker is in a different position from the parties of the protocol and cannot
simply follow the protocol, mimicking their moves. If the attacker E starts with a
network of the same structure as those of A and B and with uncorrelated random
weights, then there is no problem for E to perform the steps of the protocol when OA = OB
= OE. But in all other cases E is forced to deviate from the protocol.
2.2 Geometric Attack
The attacker constructs a neural network C with the same structure as those of A and B and randomly initializes its weights. At each step she trains C with the same input as the two parties, and updates its weights with the following rules:
• If A and B have different outputs (OA ≠ OB), then the attacker does not update C.
• If A, B and C all have the same output (OA = OB = OC), then the attacker updates C
by the usual learning rule.
• If A and B have the same output (OA = OB) but OC ≠ OA, then the attacker finds the
perceptron $k_0 \in \{1,\dots,K\}$ that minimizes the absolute local field
$\big|\sum_{i=1}^{n} w_{k_0,i}\, x_{k_0,i}\big|$. The attacker negates the hidden bit
$o_{k_0}$ and updates C assuming the new hidden bits and output OA (see the sketch after this list).
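The selection of the perceptron to flip can be sketched in C as follows (illustrative names; abs is from <stdlib.h>; the local field of perceptron k is its weighted sum before taking the sign):

int weakestPerceptron(int K, int n, const int *w, const int *x)
{
    int best = 0, bestField = 0;
    for (int k = 0; k < K; k++)
    {
        int field = 0;
        for (int i = 0; i < n; i++)
            field += w[k*n + i] * x[k*n + i];  // local field of perceptron k
        if (k == 0 || abs(field) < bestField)
        {
            bestField = abs(field);            // smallest |local field| so far
            best = k;                          // its hidden bit is most likely wrong
        }
    }
    return best;
}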
It is reported that such an attack can be successful for different variants of the KKK protocol and different parameters.
Furthermore, there is an opportunity for parallelization:
different attackers starting from randomly chosen states behave independently, and thus multiple attackers have a higher probability of being successful.
How can the attacks be made less successful?
It has been argued that increasing the range of the weights in the neural networks,
that is, the parameter L, provides a defense against geometric attacks, and
the argument has been extended to other types of attacks, including the genetic one. The
efficiency of such a defense is based on the fact that the synchronization time grows
proportionally to L², while the success rate of the attacks drops exponentially.
Such a defense may be considered theoretically sufficient; however, the quadratically
increasing synchronization time may make the whole protocol not very practical.
3 Empirical investigations of the geometric and genetic attacks
These exercises ask you to write a program implementing parallel geometric attacks on the
KKK protocol using MPI/OpenMP, and to investigate how the success rate of the attack
depends on
• the values of N, L, K;
• execution on a single host or on a cluster (with reasonable resources requested).
You need to write a C program(s) which
• implements the parallel geometric attack
Solution
Parallel geometric attacks on neural key exchange protocol
In this work we have implemented a parallel geometric attack on the key exchange protocol, which uses two-layer neural networks. According to the protocol there are two parties, say Network 1 and Network 2. Both networks have the same architecture: two layers, fixed in advance, with a single output; how many inputs they have and the range of the weights are chosen by the user. At the start of the key exchange protocol, both networks assign their weights randomly within the specified bounds. In every iteration the two networks use the same random inputs (each -1 or +1) and produce their outputs, and each side learns the other's output. Both sides then try to adjust their weights according to the anti-Hebbian learning rule so as to produce the same output. If both networks produce the same output for, say, 50 successive iterations, we call those networks synchronized.
During the synchronization of these two networks, we assume that an attacker listens to the channel and somehow knows the inputs and the outputs of both networks; we assume that the attacker knows the network architecture as well. Our attacker likewise tries to synchronize with the networks, but with the handicap that the target is not fixed: it changes constantly. Therefore, following the geometric attack algorithm, the attacker's weights are updated only in particular cases. If in some iteration the outputs of the two networks differ, the attacker does not update its own weights. If the outputs of both networks and the attacker's own output in an iteration coincide, the attacker updates its weights by the usual rule. In the last case, when the outputs of Network 1 and Network 2 agree but the attacker's output does not, we select the perceptron that minimizes the absolute local field, flip its hidden bit, and only then apply the weight update assuming Network 1's output.
We implemented our code with the MPI library, which means the attackers can run in different processes if desired. In this way, as the researchers claim, the diversity of the random initial weights increases the probability of obtaining a synchronized attacker network.
Every perceptron in our network must have the same number of inputs, which means the total number of inputs should be divisible by the number of perceptrons. If it is not, we increase the number of inputs to the next divisible value, as sketched below.
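This adjustment can be expressed as a small helper (a sketch mirroring the logic of takeInputs in the source below); for instance, 20 inputs with 6 perceptrons are rounded up to 24, as in Test-1:

int adjustInputs(int nInput, int nPerceptron)
{
    // round nInput up to the next multiple of nPerceptron
    if (nInput % nPerceptron != 0)
        nInput += nPerceptron - (nInput % nPerceptron);
    return nInput;
}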
Here are our test results for several test configurations.
Test-1
Output of the program:
Number of inputs : 20
Total numbers of perceptrons: 6
Note, due to the input per perceptron should be integer, adjusted number of input is : 24
Positive boundaries of the weights: 4
Number of attackers: 4
Initial state of networks
First Network
[4, -3, -2, 4]
[-1, -3, 3, -1]
[2, -2, -2, 3]
[-4, -1, -4, -1]
[-2, 2, -1, 3]
[-3, 4, 2, 3]
Second Network
[-4, 1, -4, 1]
[-3, 2, -3, 0]
[4, -3, 3, -3]
[4, -2, 1, -3]
[-2, -3, -1, -2]
[-4, 4, 3, 0]
Output 1: -1
Output 2: 1
Final State of Network 1 and 2
First Network
[-1, -2, 1, -1]
[0, -2, 0, 1]
[1, 1, -1, -2]
[0, 2, -1, 0]
[1, 1, 2, 1]
[-1, 2, 1, -1]
Second Network
[-1, -2, 1, -1]
[-1, -2, 0, 2]
[1, 0, 0, -2]
[0, 2, -2, 1]
[1, 0, 2, 0]
[-2, 2, 1, 0]
Output 1: 1
Output 2: -1
Network1 and Network2 are NOT SYNCHRONISED !!!!
Total iter: 1000000
--- List of syncronised attackers --------
None.
Test-2
Number of inputs : 4
Total numbers of perceptrons: 6
Note, due to the input per perceptron should be integer, adjusted number of input is : 6
Positive boundaries of the weights: 4
Number of attackers: 5
Initial state of networks
First Network
[-4]
[-1]
[1]
[3]
[3]
[3]
Second Network
[-3]
[4]
[-1]
[-4]
[3]
[0]
Output 1: -1
Output 2: 1
Final State of Network 1 and 2
First Network
[0]
[0]
[0]
[2]
[0]
[0]
Second Network
[0]
[0]
[0]
[0]
[0]
[0]
Output 1: -1
Output 2: 1
Network1 and Network2 are NOT SYNCHRONISED !!!!
Total iter: 1000000
--- List of syncronised attackers --------
- Attacker 0 Synchronised with Network 1 for '999981' iter
- Attacker 3 Synchronised with Network 1 for '999991' iter
Test-3
Network1 and Network2 are NOT SYNCHRONISED !!!!
Total iter: 1000000
--- List of syncronised attackers --------
- Attacker 1 Synchronised with Network 1 for '999956' iter
- Attacker 2 Synchronised with Network 1 for '999953' iter
- Attacker 6 Synchronised with Network 1 for '999954' iter
- Attacker 7 Synchronised with Network 1 for '999956' iter
- Attacker 10 Synchronised with Network 1 for '999944' iter
- Attacker 11 Synchronised with Network 1 for '999957' iter
- Attacker 12 Synchronised with Network 1 for '999957' iter
- Attacker 14 Synchronised with Network 1 for '959705' iter
- Attacker 16 Synchronised with Network 1 for '999936' iter
- Attacker 17 Synchronised with Network 1 for '959705' iter
- Attacker 18 Synchronised with Network 1 for '999956' iter
- Attacker 19 Synchronised with Network 1 for '999957' iter
- Attacker 23 Synchronised with Network 1 for '999956' iter
- Attacker 24 Synchronised with Network 1 for '999955' iter
- Attacker 26 Synchronised with Network 1 for '999940' iter
- Attacker 28 Synchronised with Network 1 for '999944' iter
- Attacker 29 Synchronised with Network 1 for '999958' iter
- Attacker 31 Synchronised with Network 1 for '999944' iter
- Attacker 33 Synchronised with Network 1 for '999954' iter
- Attacker 34 Synchronised with Network 1 for '999956' iter
- Attacker 36 Synchronised with Network 1 for '999956' iter
- Attacker 41 Synchronised with Network 1 for '999944' iter
- Attacker 42 Synchronised with Network 1 for '999954' iter
- Attacker 44 Synchronised with Network 1 for '999957' iter
- Attacker 45 Synchronised with Network 1 for '999940' iter
- Attacker 48 Synchronised with Network 1 for '999944' iter
- Attacker 49 Synchronised with Network 1 for '999954' iter
- Attacker 52 Synchronised with Network 1 for '999956' iter
- Attacker 53 Synchronised with Network 1 for '999954' iter
- Attacker 56 Synchronised with Network 1 for '999949' iter
- Attacker 57 Synchronised with Network 1 for '999957' iter
- Attacker 60 Synchronised with Network 1 for '999949' iter
- Attacker 64 Synchronised with Network 1 for '999959' iter
- Attacker 65 Synchronised with Network 1 for '999956' iter
- Attacker 66 Synchronised with Network 1 for '999949' iter
- Attacker 67 Synchronised with Network 1 for '999956' iter
- Attacker 68 Synchronised with Network 1 for '999955' iter
- Attacker 70 Synchronised with Network 1 for '999955' iter
- Attacker 80 Synchronised with Network 1 for '999939' iter
- Attacker 85 Synchronised with Network 1 for '999956' iter
- Attacker 86 Synchronised with Network 1 for '999957' iter
- Attacker 88 Synchronised with Network 1 for '999949' iter
- Attacker 89 Synchronised with Network 1 for '999956' iter
- Attacker 91 Synchronised with Network 1 for '999960' iter
- Attacker 94 Synchronised with Network 1 for '999956' iter
- Attacker 97 Synchronised with Network 1 for '999944' iter
- Attacker 98 Synchronised with Network 1 for '999956' iter
- Attacker 100 Synchronised with Network 1 for '999956' iter
- Attacker 101 Synchronised with Network 1 for '999959' iter
- Attacker 105 Synchronised with Network 1 for '999958' iter
- Attacker 109 Synchronised with Network 1 for '999936' iter
- Attacker 111 Synchronised with Network 1 for '999954' iter
- Attacker 112 Synchronised with Network 1 for '999949' iter
- Attacker 114 Synchronised with Network 1 for '999955' iter
- Attacker 115 Synchronised with Network 1 for '999954' iter
- Attacker 117 Synchronised with Network 1 for '999949' iter
- Attacker 119 Synchronised with Network 1 for '999940' iter
- Attacker 120 Synchronised with Network 1 for '999949' iter
- Attacker 121 Synchronised with Network 1 for '959705' iter
- Attacker 123 Synchronised with Network 1 for '999956' iter
- Attacker 125 Synchronised with Network 1 for '999956' iter
- Attacker 126 Synchronised with Network 1 for '999957' iter
- Attacker 127 Synchronised with Network 1 for '999940' iter
- Attacker 128 Synchronised with Network 1 for '999954' iter
- Attacker 129 Synchronised with Network 1 for '999956' iter
Test-4
Number of inputs : 40
Total numbers of perceptrons: 4
Note, due to the input per perceptron should be integer, adjusted number of input is : 40
Positive boundaries of the weights: 4
Number of attackers: 4
Initial state of networks
First Network
[3, 1, -3, 0, 4, 2, -4, 2, -3, -4]
[2, 2, -4, -3, -4, 2, 2, 4, 1, -4]
[0, 0, 2, 4, -2, 3, 4, -4, -3, -1]
[-2, -3, 4, -4, -4, -1, -4, 0, 2, 2]
Second Network
[2, 0, 3, 0, -3, -1, 3, -4, -1, -2]
[3, 3, 1, 1, 1, 2, -4, 4, 1, 4]
[1, -2, -2, 4, -3, -4, 4, 4, -1, -4]
[-4, -4, -4, -4, 3, 0, 3, -2, 0, 4]
Output 1: -1
Output 2: -1
Final State of Network 1 and 2
First Network
[0, 0, -1, 1, -1, -1, 3, 3, 0, -2]
[1, 0, 2, 1, -1, -1, -2, -3, 3, 2]
[-2, 1, -1, 0, -2, -3, 2, 1, -1, 1]
[1, 0, 3, 1, -1, 2, 2, 1, -2, 0]
Second Network
[0, 0, -1, 1, -1, -1, 3, 3, 0, -2]
[1, 0, 2, 1, -1, -1, -2, -3, 3, 2]
[-2, 1, -1, 0, -2, -3, 2, 1, -1, 1]
[1, 0, 3, 1, -1, 2, 2, 1, -2, 0]
Output 1: 1
Output 2: 1
Network1 and Network2 are SYNCHRONISED !!
Synchronised iter: 50
Total iter: 113921
Time Taken: 449.000 ms
--- List of syncronised attackers --------
None
Source.c
// define necessary libraries
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>

// function for printing out networks 1 and 2
void printNetworks(int np, int nip, int *W1, int *W2, int o1, int o2)
{
    printf("First Network \n");
    for (int i = 0; i < np; i++)
    {
        // print network 1 weights
        printf("[");
        for (int j = 0; j < nip; j++)
        {
            printf("%d", W1[i*nip+j]);
            if (j < nip-1)
                printf(", ");
        }
        printf("]\n");
    }
    printf("\nSecond Network \n");
    for (int i = 0; i < np; i++)
    {
        // print network 2 weights
        printf("[");
        for (int j = 0; j < nip; j++)
        {
            printf("%d", W2[i*nip+j]);
            if (j < nip-1)
                printf(", ");
        }
        printf("]\n");
    }
    // network outputs
    printf("\nOutput 1: %d", o1);
    printf("\nOutput 2: %d\n", o2);
}
// fill an N-length array with random -1/+1 values
void generateRandomInput(int *x, int N)
{
    for (int i = 0; i < N; i++)
    {
        x[i] = 1;
        if (rand() % 2 == 0)
            x[i] = -1;
    }
}
// change the network weights according to the anti-Hebbian learning rule
void updateWeights(int np, int bnd, int nip, int nout, int *pout, int *W, int *inp)
{
    for (int i = 0; i < np; i++)
    {
        // only perceptrons whose output equals the network output are updated
        if (pout[i] == nout)
        {
            for (int j = 0; j < nip; j++) // loop over each input of this perceptron
            {
                // calculate the desired weight
                int w = W[i*nip+j] - (pout[i] * inp[(i*nip)+j]);
                // if the calculated weight is out of bounds, set it to the bound
                if (w < -bnd)
                {
                    w = -bnd;
                }
                else if (w > bnd)
                {
                    w = bnd;
                }
                // set the calculated weight
                W[i*nip+j] = w;
            }
        }
    }
}
// generate the output of each perceptron and then the overall network output
int calculateOutput(int np, int nip, int *pout, int *W, int *inp)
{
    int nout = 1;
    // outer loop runs over the perceptrons
    for (int i = 0; i < np; i++)
    {
        // accumulate the weighted sum of this perceptron's inputs
        pout[i] = 0;
        for (int j = 0; j < nip; j++) // loop over each input of this perceptron
        {
            pout[i] = pout[i] + (W[(i*nip)+j] * inp[(i*nip)+j]);
        }
        // saturate the calculated sum to -1 or 1 with threshold 0
        if (pout[i] <= 0)
            pout[i] = -1;
        else
            pout[i] = 1;
        // the network output is the cumulative product (parity) of the perceptron outputs
        nout = nout * pout[i];
    }
    return nout;
}
// fill an n-length array with random weights in the range [-b, +b]
void setRandomWeight(int *W, int n, int b)
{
    for (int i = 0; i < n; i++)
        W[i] = (rand() % (2 * b + 1)) - b;
}
// take the necessary inputs from the user
void takeInputs(int *nInput, int *nPerceptron, int *boundWeight, int *nAttacker, int *nInputPerc)
{
    printf("Number of inputs : ");
    fflush(stdout);
    scanf("%d", nInput);
    printf("Total numbers of perceptrons: ");
    fflush(stdout);
    scanf("%d", nPerceptron);
    // round the number of inputs up to the next multiple of the perceptron count
    if (*nInput % *nPerceptron != 0)
        *nInput = *nInput + *nPerceptron - (*nInput % *nPerceptron);
    *nInputPerc = *nInput / *nPerceptron;
    printf("Note, due to the input per perceptron should be integer, adjusted number of input is : %d\n", *nInput);
    printf("Positive boundaries of the weights: ");
    fflush(stdout);
    scanf("%d", boundWeight);
    printf("Number of attackers: ");
    fflush(stdout);
    scanf("%d", nAttacker);
}
int main(void)
{
    // communicator size and rank of this process for MPI
    int comm_size;
    int proc_id;
    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &proc_id);
    // necessary parameters: number of inputs, positive boundary of the weights,
    // number of perceptrons, inputs per perceptron, number of attackers and attackers per process
    int nInput, boundWeight, nPerceptron, nInputPerc, nAttacker, nAttackerProcess, sMax = 50, iterMax = 1000000;
    // clock variables to measure the time cost
    clock_t begin, end;
    double time_taken;
    // seed the random generator with the time so each run differs,
    // and with the process id so each process draws different numbers
    srand(time(NULL) + proc_id);
    // process 0 takes the inputs from the user
    if (proc_id == 0)
    {
        takeInputs(&nInput, &nPerceptron, &boundWeight, &nAttacker, &nInputPerc);
    }
    // right now only process 0 knows the given parameters,
    // so we need to broadcast them to the other processes
    MPI_Bcast(&nInput, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Bcast(&nPerceptron, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Bcast(&boundWeight, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Bcast(&nAttacker, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Bcast(&nInputPerc, 1, MPI_INT, 0, MPI_COMM_WORLD);
    // every process needs to calculate its own attacker count;
    // if the number of attackers is divisible by the number of processes there is no problem,
    // otherwise the remainder is spread over the first (nAttacker % comm_size) processes
    nAttackerProcess = nAttacker / comm_size;
    if (proc_id < nAttacker % comm_size)
        nAttackerProcess++;
    // weight matrices of the two attacked networks: nPerceptron rows, nInputPerc columns
    int *W1 = malloc(nPerceptron * nInputPerc * sizeof(int));
    int *W2 = malloc(nPerceptron * nInputPerc * sizeof(int));
    // set the initial weights randomly in the range [-boundWeight, +boundWeight]
    setRandomWeight(W1, nPerceptron * nInputPerc, boundWeight);
    setRandomWeight(W2, nPerceptron * nInputPerc, boundWeight);
    // attacker weight matrix with 3 dimensions: nAttackerProcess, nPerceptron and nInputPerc
    int *aW = malloc(nAttackerProcess * nPerceptron * nInputPerc * sizeof(int));
    setRandomWeight(aW, nAttackerProcess * nPerceptron * nInputPerc, boundWeight);
    // synchronisation counter, total iteration counter and the input array
    int iter = 0;
    int S = 0;
    // how many consecutive rounds each attacker agreed with network 1
    int *attackS = calloc(nAttackerProcess, sizeof(int));
    int *inputsArray = malloc(nInput * sizeof(int));
    // outputs of networks 1 and 2
    int net1out = 0;
    int net2out = 0;
    // perceptron outputs of networks 1 and 2
    int *pOutputs1 = malloc(nPerceptron * sizeof(int));
    int *pOutputs2 = malloc(nPerceptron * sizeof(int));
    // attackers' network outputs: one per attacker
    int *OutputsAttackers = malloc(nAttackerProcess * sizeof(int));
    // attackers' perceptron outputs: nPerceptron values per attacker
    int **pOutputsAttackers = malloc(nAttackerProcess * sizeof(int*));
    for (int i = 0; i < nAttackerProcess; i++)
        pOutputsAttackers[i] = malloc(nPerceptron * sizeof(int));
    // store the starting clock time
    if (proc_id == 0)
        begin = clock();
    while ((S < sMax) && (iter < iterMax))
    {
        if (proc_id == 0)
        {
            // generate inputs as -1 or +1
            generateRandomInput(inputsArray, nInput);
            // calculate the network outputs
            net1out = calculateOutput(nPerceptron, nInputPerc, pOutputs1, W1, inputsArray);
            net2out = calculateOutput(nPerceptron, nInputPerc, pOutputs2, W2, inputsArray);
        }
        // print the initial network weights and outputs
        if (proc_id == 0 && iter == 0)
        {
            printf("Initial state of networks\n");
            printNetworks(nPerceptron, nInputPerc, W1, W2, net1out, net2out);
        }
        // broadcast the inputs and both outputs to all other processes
        MPI_Bcast(&(inputsArray[0]), nInput, MPI_INT, 0, MPI_COMM_WORLD);
        MPI_Bcast(&net1out, 1, MPI_INT, 0, MPI_COMM_WORLD);
        MPI_Bcast(&net2out, 1, MPI_INT, 0, MPI_COMM_WORLD);
        // calculate the output of each attacker network and of its perceptrons
        for (int i = 0; i < nAttackerProcess; i++)
        {
            OutputsAttackers[i] = calculateOutput(nPerceptron, nInputPerc, pOutputsAttackers[i], &aW[i*nPerceptron*nInputPerc], inputsArray);
        }
        if (proc_id == 0)
        {
            // check the network outputs and apply the learning rule if necessary
            if (net1out == net2out)
            {
                // update the weights of networks 1 and 2
                updateWeights(nPerceptron, boundWeight, nInputPerc, net1out, pOutputs1, W1, inputsArray);
                updateWeights(nPerceptron, boundWeight, nInputPerc, net2out, pOutputs2, W2, inputsArray);
                // count how many consecutive rounds both outputs coincide
                S++;
            }
            else
                S = 0;
        }
        // ATTACKER: check the outputs for each attacker and apply the learning rule
        for (int i = 0; i < nAttackerProcess; i++)
        {
            // if the attacker's output agrees with network 1
            if (OutputsAttackers[i] == net1out)
            {   // and networks 1 and 2 also agree, apply the weight change
                if (net1out == net2out)
                {
                    updateWeights(nPerceptron, boundWeight, nInputPerc, OutputsAttackers[i], pOutputsAttackers[i], &aW[i*nPerceptron*nInputPerc], inputsArray);
                }
                // count how many consecutive rounds the attacker and network 1 agree
                attackS[i] = attackS[i] + 1;
            }
            else
            {   // the attacker disagrees, so perform the geometric correction step
                if (net1out == net2out)
                {
                    // find the perceptron with the smallest absolute local field
                    // |sum w*x|, i.e. the hidden bit most likely to be wrong
                    int wMin = 0;
                    int selectedPercp = 0;
                    for (int j = 0; j < nPerceptron; j++)
                    {
                        int field = 0;
                        for (int k = 0; k < nInputPerc; k++)
                        {
                            field += aW[i*nPerceptron*nInputPerc + j*nInputPerc + k] * inputsArray[(j*nInputPerc)+k];
                        }
                        if (j == 0 || abs(field) < wMin)
                        {
                            wMin = abs(field);
                            selectedPercp = j;
                        }
                    }
                    // flip the sign of that hidden bit and assume the output of A
                    pOutputsAttackers[i][selectedPercp] = pOutputsAttackers[i][selectedPercp] * -1;
                    OutputsAttackers[i] = net1out;
                    // apply the learning rule with the corrected hidden bits
                    updateWeights(nPerceptron, boundWeight, nInputPerc, OutputsAttackers[i], pOutputsAttackers[i], &aW[i*nPerceptron*nInputPerc], inputsArray);
                }
                attackS[i] = 0;
            }
        }
        iter++;
        // broadcast S so the other processes know when to break out of the loop
        MPI_Bcast(&S, 1, MPI_INT, 0, MPI_COMM_WORLD);
    }
    if (proc_id == 0)
    {
        // print the final state of the networks to compare them
        printf("\nFinal State of Network 1 and 2\n\n");
        printNetworks(nPerceptron, nInputPerc, W1, W2, net1out, net2out);
        // networks 1 and 2 are synchronised if and only if S reached sMax
        if (S == sMax)
        {
            printf("\n Network1 and Network2 are SYNCHRONISED !!\n");
            printf("Synchronised iter: %d\n", S);
            printf("Total iter: %d", iter);
            // take the ending clock time and print the time taken
            end = clock();
            time_taken = ((double)(end - begin) / CLOCKS_PER_SEC) * 1000;
            printf("\nTime Taken: %.3f ms\n", time_taken);
        }
        else
        {
            printf("\n Network1 and Network2 are NOT SYNCHRONISED !!!!\n");
            printf("Total iter: %d\n", iter);
        }
    }
    // wait for the other processes
    MPI_Barrier(MPI_COMM_WORLD);
    // within each process the attacker index starts at 0, but for global ids every
    // process needs its own offset: process 0's offset is 0, process 1's offset is
    // the attacker count of process 0, process 2's offset is the sum of the counts
    // of processes 0 and 1, and so forth
    int offset = 0;
    for (int j = 0; j < proc_id; j++)
    {
        int tmp = nAttacker / comm_size;
        if (j < nAttacker % comm_size)
            tmp++;
        offset += tmp;
    }
    // print the id numbers of the attacker networks synchronised with network 1
    printf("--- List of syncronised attackers --------\n");
    for (int i = 0; i < nAttackerProcess; i++)
    {
        if (attackS[i] >= sMax)
        {
            printf("- Attacker %d Synchronised with Network 1 for '%d' iter \n", i + offset, attackS[i]);
        }
    }
    // releasing memory (all processes allocated these buffers, so all free them)
    free(W1);
    free(W2);
    free(inputsArray);
    free(pOutputs1);
    free(pOutputs2);
    free(OutputsAttackers);
    for (int i = 0; i < nAttackerProcess; i++)
        free(pOutputsAttackers[i]);
    free(pOutputsAttackers);
    free(aW);
    free(attackS);
    MPI_Finalize();
    return 0;
}