Fattah Regression 2023
Loss Function
[Figure: scatter plot of the training dataset against Birthweight (g), with a fitted regression line; reading the line at a query birthweight gives the answer 84]
Inference: can be done after the model (i.e., the line in the figure) is trained
Training Dataset: can be used to train the model

Birthweight (g)   y
850               84
933               87
1001              81
1111              88
1230              92
1321              101
1370              102
1390              85
1422              95
1480              120
1487              114
1490              100

[Figure: the dataset plotted against Birthweight (g), with a fitted line y = m·x + b, where b = 29.21]
But, how can we find (or learn) m and b?
[Figure: three hypotheses y′ = h_θ(x) = θ₁·x plotted against x for θ₁ = 3, θ₁ = 2, and θ₁ = 1, each shown next to the cost curve J(θ₁) with the point at that value of θ₁ marked]
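As a minimal sketch of how the cost curve J(θ₁) above could be traced out, the snippet below evaluates a squared-error style cost at θ₁ = 1, 2, 3. Both the tiny dataset and the exact form of J here are illustrative assumptions, not the lecture's own numbers.

```python
# Minimal sketch: evaluate a squared-error cost J(theta1) for a few values of theta1.
# The toy dataset and the exact form of J are illustrative assumptions.
xs = [1.0, 2.0, 3.0]          # hypothetical feature values
ys = [2.0, 4.0, 6.0]          # hypothetical targets (generated by y = 2x, so theta1 = 2 fits best)

def J(theta1):
    n = len(xs)
    return sum((theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * n)

for theta1 in (1, 2, 3):
    print(f"theta1 = {theta1}: J = {J(theta1):.3f}")
# theta1 = 2 gives J = 0, matching the idea that the cost is lowest
# where the hypothesis y' = theta1 * x fits the data best.
```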
▸Outline:
- Have some cost function J(θ₀, …, θₙ₋₁)
- Start off with some guesses for θ₀, …, θₙ₋₁
  • It does not really matter what values you start off with, but a common choice is to set them all initially to zero
- Repeat until convergence {
    θⱼ = θⱼ − α · ∂J(θ₀, …, θₙ₋₁) / ∂θⱼ
  }
  where α is the learning rate and ∂J/∂θⱼ is the partial derivative
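As a rough sketch of the loop in this outline (not the lecture's own code), the snippet below runs the update θⱼ = θⱼ − α·∂J/∂θⱼ with numerically estimated partial derivatives. The two-parameter cost function is an arbitrary choice made only so the example runs.

```python
# Sketch of the "repeat until convergence" loop with numerical partial derivatives.
# The cost function J below is an arbitrary illustrative choice, not from the lecture.
def J(theta):
    t0, t1 = theta
    return (t0 - 3) ** 2 + (t1 + 1) ** 2   # minimum at theta = [3, -1]

def partial(J, theta, j, eps=1e-6):
    bumped = theta.copy()
    bumped[j] += eps
    return (J(bumped) - J(theta)) / eps    # approximate dJ/dtheta_j

alpha = 0.1                                # learning rate
theta = [0.0, 0.0]                         # common choice: start at zero
for _ in range(200):                       # "repeat until convergence" (fixed budget here)
    grads = [partial(J, theta, j) for j in range(len(theta))]
    theta = [t - alpha * g for t, g in zip(theta, grads)]   # simultaneous update
print(theta)                               # approaches [3, -1]
```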
[Figure: distance vs. time, with a straight (blue) line from the start to the point (9.58 s, 100 m)]
What is the average speed of Usain Bolt?
Average speed = change in distance / change in time = Δy/Δx = 100/9.58 ≈ 10.43 m/s

But this average speed is different from the instantaneous speed!
Bolt will not cover the 100 m at a constant pace; rather, he starts off a little slower, then accelerates, then decelerates a little towards the end.
[Figure: the actual distance-time curve (green) plotted alongside the straight blue line]

This way, Δy/Δx taken between one pair of points on the green curve is not equal to Δy/Δx taken between another pair (the opposite of a straight line, where it does not matter which two points you take since the slope is always the same).
Consequently, at any given moment in time, a slope on the green function will be different from the average slope of the blue line.

This instantaneous slope is what mathematicians denote as the derivative and write as:
lim(Δx→0) Δy/Δx = dy/dx
where dx is an infinitely small change in x (d stands for differential).
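To make the limit concrete, the snippet below shrinks Δx for a hypothetical distance-time curve (the smooth profile is only an assumption, not Bolt's real split times) and watches Δy/Δx settle toward the instantaneous speed dy/dx.

```python
# Average slope dy/dx over a shrinking interval approaches the derivative.
# distance(t) is a made-up smooth profile (slower start), not real race data.
def distance(t):
    return 100.0 * (t / 9.58) ** 1.5      # reaches 100 m at t = 9.58 s

t = 5.0                                    # look at the instantaneous speed at t = 5 s
for dx in (1.0, 0.1, 0.01, 0.001):
    dy = distance(t + dx) - distance(t)
    print(f"dx = {dx:<6} dy/dx = {dy / dx:.4f} m/s")
# As dx -> 0 the ratio settles to a single value: the derivative at t = 5 s.
# The overall average, 100 / 9.58 = 10.43 m/s, is in general different from it.
```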
▸Outline:
- Have some cost function J(θ₀, …, θₙ₋₁)
- Start off with some guesses for θ₀, …, θₙ₋₁
  • It does not really matter what values you start off with, but a common choice is to set them all initially to zero
- Keep changing θ₀, …, θₙ₋₁ to reduce J(θ₀, …, θₙ₋₁) until we hopefully end up at a minimum location
  • When you are at a certain position on the surface of J, look around, then take a little step in the direction of the steepest descent, then repeat
▸Outline:
- Have some cost function J(θ₀, …, θₙ₋₁)
- Start off with some guesses for θ₀, …, θₙ₋₁
  • It does not really matter what values you start off with, but a common choice is to set them all initially to zero
- Repeat until convergence {
    θⱼ = θⱼ − α · ∂J(θ₀, …, θₙ₋₁) / ∂θⱼ
  }
  where α is the learning rate and ∂J/∂θⱼ is the partial derivative. But what do α and ∂ do?
h_θ(x) is the Hypothesis Function; J(θ₁) is the Cost Function
[Figure: the hypothesis y′ = θ₁·x with θ₁ = 1 plotted against x, next to the cost curve J(θ₁) with the point at θ₁ = 1 marked]
[Figure: gradient descent steps on the J(θ₁) curve. Where the derivative is positive, the new θ₁ lies to the left of the old θ₁; where the derivative is negative, the new θ₁ lies to the right of the old θ₁; at the minimum the derivative = 0, so θ₁ stops changing]
▸Outline:
- Have some cost function J(θ₀, …, θₙ₋₁)
- Start off with some guesses for θ₀, …, θₙ₋₁
  • It does not really matter what values you start off with, but a common choice is to set them all initially to zero
- Repeat until convergence {
    θⱼ = θⱼ − α · ∂J(θ₀, …, θₙ₋₁) / ∂θⱼ
  }
  where α is the learning rate, which controls how big a step we take when we update θⱼ, and ∂J/∂θⱼ is the partial derivative
Now we understand the intuition behind gradient descent and how α and ∂ act together to make gradient descent work!
Here θ is a vector that holds all the parameters, that is, θ = [θ₀, θ₁, …, θₘ], and x is a vector that encompasses all the features, that is, x = [x₀, x₁, …, xₘ] (x₀ shall always be equal to 1)

Spam Filter
Representing an email: x = [1, 0, 0, 1, 0]
Assume θ = [3.2, 4.1, 2.9, 6.7, 1.1]
h_θ(x) is computed from θᵀx = [3.2, 4.1, 2.9, 6.7, 1.1]ᵀ · [1, 0, 0, 1, 0]
[Figure: the sigmoid function of z, plotted for z between −8 and 8; it approaches 0 for large negative z and 1 for large positive z]
if h_θ(x) < 0.5, predict 0
if h_θ(x) ≥ 0.5, predict 1
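As a small sketch using the numbers from the spam-filter slide above (x = [1, 0, 0, 1, 0] and the assumed θ = [3.2, 4.1, 2.9, 6.7, 1.1]), the code below computes θᵀx, squashes it with the sigmoid, and applies the 0.5 threshold.

```python
import math

# Email representation and parameter vector from the slide.
x = [1, 0, 0, 1, 0]
theta = [3.2, 4.1, 2.9, 6.7, 1.1]

z = sum(t * xi for t, xi in zip(theta, x))      # theta^T x = 3.2 + 6.7 = 9.9
h = 1.0 / (1.0 + math.exp(-z))                  # sigmoid squashes z into (0, 1)
prediction = 1 if h >= 0.5 else 0               # predict 1 if h_theta(x) >= 0.5, else 0
print(z, h, prediction)                         # 9.9, ~0.99995, 1
```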
Cost(h_θ(x), y) = −log(h_θ(x))       if y = 1
Cost(h_θ(x), y) = −log(1 − h_θ(x))   if y = 0

Note: on its own, θᵀx still assumes real-valued outputs for h_θ(x) (i.e., still entails a regression problem), while logistic regression should predict discrete values (i.e., logistic regression is a classification problem)

Equivalent To:
−log( 1 / (1 + e^(−θᵀx)) )        if y = 1
−log( 1 − 1 / (1 + e^(−θᵀx)) )    if y = 0

≣

Cost function J(θ):
minimize over θ:  J(θ) = (1/n) Σᵢ₌₁ⁿ [ −y⁽ⁱ⁾ log( 1 / (1 + e^(−θᵀx⁽ⁱ⁾)) ) − (1 − y⁽ⁱ⁾) log( 1 − 1 / (1 + e^(−θᵀx⁽ⁱ⁾)) ) ]
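Below is a minimal sketch of this cost function J(θ) in code; the tiny placeholder dataset exists only to make the snippet runnable.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cost(theta, X, y):
    """J(theta) = (1/n) * sum( -y*log(h) - (1-y)*log(1-h) ), with h = sigmoid(theta^T x)."""
    n = len(X)
    total = 0.0
    for xi, yi in zip(X, y):
        h = sigmoid(sum(t * f for t, f in zip(theta, xi)))
        total += -yi * math.log(h) - (1 - yi) * math.log(1 - h)
    return total / n

# Placeholder data purely for illustration (x0 = 1 is the bias feature).
X = [[1, 0, 1], [1, 1, 0]]
y = [1, 0]
print(cost([0.0, 0.0, 0.0], X, y))   # log(2) ~ 0.693 when all parameters are zero
```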
▸Outline:
- Have cost function J(θ), where θ = [θ₀, …, θₘ]
- Start off with some guesses for θ₀, …, θₘ
  • It does not really matter what values you start off with, but a common choice is to set them all initially to zero
- Repeat until convergence {
    θⱼ = θⱼ − α · ∂J(θ) / ∂θⱼ      (Note: update all θⱼ simultaneously)
  }
  where α is the learning rate, which controls how big a step we take when we update θⱼ, and ∂J/∂θⱼ is the partial derivative
▸Outline:
- Have cost function J(θ), where θ = [θ₀, …, θₘ]
- Start off with some guesses for θ₀, …, θₘ
  • It does not really matter what values you start off with, but a common choice is to set them all initially to zero
- Repeat until convergence {
    θⱼ = θⱼ − α · Σᵢ₌₁ⁿ ( 1 / (1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · xⱼ⁽ⁱ⁾
  }
  This is the final formula after applying the partial derivatives
if h_θ(x) = 1 / (1 + e^(−θᵀx)) < 0.5, predict 0
else if h_θ(x) = 1 / (1 + e^(−θᵀx)) ≥ 0.5, predict 1
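As a sketch of how this final formula translates into code, the function below performs one pass of the update θⱼ = θⱼ − α · Σᵢ ( 1/(1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · xⱼ⁽ⁱ⁾, updating all θⱼ simultaneously from the old θ; in practice you would repeat it until convergence.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_descent_step(theta, X, y, alpha):
    """One update of every theta_j using the summed-gradient formula from the slide."""
    new_theta = []
    for j in range(len(theta)):
        grad_j = sum(
            (sigmoid(sum(t * f for t, f in zip(theta, xi))) - yi) * xi[j]
            for xi, yi in zip(X, y)
        )
        new_theta.append(theta[j] - alpha * grad_j)   # every theta_j computed from the OLD theta
    return new_theta
```

The worked spam-filter example on the following slides corresponds to calling this function once with θ = [0, 0, 0, 0, 0, 0] and α = 0.5.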
A Training Dataset
1 entails that a word (e.g., "and") is present in an email (e.g., "Email a")
0 entails that a word (e.g., "and") is absent in an email (e.g., "Email b")
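As a small sketch of how such 0/1 feature vectors could be built (the vocabulary and example email below are hypothetical, not the lecture's actual word list), each position marks whether a given word appears in the email, with x₀ fixed to 1.

```python
# Hypothetical vocabulary; position 0 is reserved for the always-on bias feature x0 = 1.
vocab = ["and", "free", "meeting", "winner", "offer"]

def email_to_features(text):
    words = set(text.lower().split())
    return [1] + [1 if w in words else 0 for w in vocab]   # 1 = word present, 0 = absent

print(email_to_features("You are a winner claim your free offer"))
# -> [1, 0, 1, 0, 1, 1]  (bias, "and" absent, "free" present, ...)
```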
▸Outline:
- Have cost function J(θ), where θ = [θ₀, …, θₘ]
- Start off with some guesses for θ₀, …, θₘ
  • It does not really matter what values you start off with, but a common choice is to set them all initially to zero
- Repeat until convergence {
    θⱼ = θⱼ − α · Σᵢ₌₁ⁿ ( 1 / (1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · xⱼ⁽ⁱ⁾
  }
First, let us calculate the factor ( 1 / (1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) for every example in our training dataset
▸Outline:
- Have cost function J(θ), where θ = [θ₀, …, θₘ]
- Start off with some guesses for θ₀, …, θₘ
  • It does not really matter what values you start off with, but a common choice is to set them all initially to zero
- Repeat until convergence {
    θⱼ = θⱼ − α · Σᵢ₌₁ⁿ ( 1 / (1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · xⱼ⁽ⁱ⁾
  }
Second, let us calculate this equation for every example in our training dataset and for every θⱼ, where j is between 0 and m
With θ = [0, 0, 0, 0, 0, 0], θᵀx⁽ⁱ⁾ = 0 for every example, so the factor is 1/(1 + e⁰) − y⁽ⁱ⁾ = 0.5 − y⁽ⁱ⁾:

x               y    θᵀx                                  1/(1 + e^(−θᵀx)) − y
[1,1,1,0,1,1]   1    [0,0,0,0,0,0]·[1,1,1,0,1,1] = 0      0.5 − 1 = −0.5
[1,0,0,1,1,0]   0    [0,0,0,0,0,0]·[1,0,0,1,1,0] = 0      0.5 − 0 = 0.5
[1,0,1,1,0,0]   1    [0,0,0,0,0,0]·[1,0,1,1,0,0] = 0      0.5 − 1 = −0.5
[1,1,0,0,1,0]   0    [0,0,0,0,0,0]·[1,1,0,0,1,0] = 0      0.5 − 0 = 0.5
[1,1,0,1,0,1]   1    [0,0,0,0,0,0]·[1,1,0,1,0,1] = 0      0.5 − 1 = −0.5
[1,1,0,1,1,0]   0    [0,0,0,0,0,0]·[1,1,0,1,1,0] = 0      0.5 − 0 = 0.5
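The table above can be reproduced with a short sketch: with θ still all zeros, θᵀx⁽ⁱ⁾ = 0 for every example, so the factor 1/(1 + e⁰) − y⁽ⁱ⁾ is 0.5 − y⁽ⁱ⁾, i.e. ±0.5.

```python
import math

# The six training examples from the slides (x0 = 1 is the bias feature).
X = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 0],
]
y = [1, 0, 1, 0, 1, 0]
theta = [0, 0, 0, 0, 0, 0]

for xi, yi in zip(X, y):
    z = sum(t * f for t, f in zip(theta, xi))        # theta^T x = 0 for every example
    factor = 1.0 / (1.0 + math.exp(-z)) - yi         # 0.5 - y  ->  +0.5 or -0.5
    print(xi, yi, factor)
```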
▸Outline:
- Have cost function J(θ), where θ = [θ₀, …, θₘ]
- Start off with some guesses for θ₀, …, θₘ
  • It does not really matter what values you start off with, but a common choice is to set them all initially to zero
- Repeat until convergence {
    θⱼ = θⱼ − α · Σᵢ₌₁ⁿ ( 1 / (1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · xⱼ⁽ⁱ⁾
  }
Third, let us compute every θⱼ
Updating θ₀ (with the old θ = [0, 0, 0, 0, 0, 0] and α = 0.5), the per-example values of ( 1/(1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · x₀⁽ⁱ⁾ are:

x               y    ( 1/(1 + e^(−θᵀx)) − y ) · x₀
[1,1,1,0,1,1]   1    −0.5
[1,0,0,1,1,0]   0     0.5
[1,0,1,1,0,0]   1    −0.5
[1,1,0,0,1,0]   0     0.5
[1,1,0,1,0,1]   1    −0.5
[1,1,0,1,1,0]   0     0.5

Then, θ₀ = θ₀ − α × 0 = 0 − 0.5 × 0 = 0   (the new θ₀ is computed from the old θ₀)
New parameter vector: θ = [0, θ₁, θ₂, θ₃, θ₄, θ₅]
Updating θ₁ (same six examples), the per-example values of ( 1/(1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · x₁⁽ⁱ⁾ are: −0.5, 0, 0, 0.5, −0.5, 0.5, which sum to 0
Then, θ₁ = θ₁ − α × 0 = 0 − 0.5 × 0 = 0
New parameter vector: θ = [0, 0, θ₂, θ₃, θ₄, θ₅]
Updating θ₂, the per-example values of ( 1/(1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · x₂⁽ⁱ⁾ are: −0.5, 0, −0.5, 0, 0, 0, which sum to −1
Then, θ₂ = θ₂ − α × (−1) = 0 − 0.5 × (−1) = 0.5
New parameter vector: θ = [0, 0, 0.5, θ₃, θ₄, θ₅]
Updating θ₃, the per-example values of ( 1/(1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · x₃⁽ⁱ⁾ are: 0, 0.5, −0.5, 0, −0.5, 0.5, which sum to 0
Then, θ₃ = θ₃ − α × 0 = 0 − 0.5 × 0 = 0
New parameter vector: θ = [0, 0, 0.5, 0, θ₄, θ₅]
Updating θ₄, the per-example values of ( 1/(1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · x₄⁽ⁱ⁾ are: −0.5, 0.5, 0, 0.5, 0, 0.5, which sum to 1
Then, θ₄ = θ₄ − α × 1 = 0 − 0.5 × 1 = −0.5
New parameter vector: θ = [0, 0, 0.5, 0, −0.5, θ₅]
Updating θ₅, the per-example values of ( 1/(1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾ ) · x₅⁽ⁱ⁾ are: −0.5, 0, 0, 0, −0.5, 0, which sum to −1
Then, θ₅ = θ₅ − α × (−1) = 0 − 0.5 × (−1) = 0.5
New parameter vector: θ = [0, 0, 0.5, 0, −0.5, 0.5]
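The six updates above amount to one simultaneous gradient descent step. The sketch below reproduces them end to end with α = 0.5 and prints the new parameter vector θ = [0, 0, 0.5, 0, −0.5, 0.5].

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training dataset from the slides.
X = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 0],
]
y = [1, 0, 1, 0, 1, 0]

theta = [0.0] * 6
alpha = 0.5

# One "repeat until convergence" iteration: compute every gradient from the OLD theta,
# then update all theta_j simultaneously.
grads = [
    sum((sigmoid(sum(t * f for t, f in zip(theta, xi))) - yi) * xi[j] for xi, yi in zip(X, y))
    for j in range(6)
]
theta = [t - alpha * g for t, g in zip(theta, grads)]
print(theta)   # [0.0, 0.0, 0.5, 0.0, -0.5, 0.5]
```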
With the new parameter vector θ = [0, 0, 0.5, 0, −0.5, 0.5]:

x               y    θᵀx                                          h_θ(x) = 1/(1 + e^(−θᵀx))    Predicted Class (or y′)
[1,1,1,0,1,1]   1    [0,0,0.5,0,−0.5,0.5]·[1,1,1,0,1,1] = 0.5     0.622459331                  1
[1,0,0,1,1,0]   0    [0,0,0.5,0,−0.5,0.5]·[1,0,0,1,1,0] = −0.5    0.377540669                  0
[1,0,1,1,0,0]   1    [0,0,0.5,0,−0.5,0.5]·[1,0,1,1,0,0] = 0.5     0.622459331                  1
[1,1,0,0,1,0]   0    [0,0,0.5,0,−0.5,0.5]·[1,1,0,0,1,0] = −0.5    0.377540669                  0
[1,1,0,1,0,1]   1    [0,0,0.5,0,−0.5,0.5]·[1,1,0,1,0,1] = 0.5     0.622459331                  1
[1,1,0,1,1,0]   0    [0,0,0.5,0,−0.5,0.5]·[1,1,0,1,1,0] = −0.5    0.377540669                  0

No mispredictions!
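A final sketch to verify the table above: with θ = [0, 0, 0.5, 0, −0.5, 0.5], every example lands on the correct side of the 0.5 threshold, so there are no mispredictions.

```python
import math

theta = [0, 0, 0.5, 0, -0.5, 0.5]
X = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 1, 0],
]
y = [1, 0, 1, 0, 1, 0]

for xi, yi in zip(X, y):
    z = sum(t * f for t, f in zip(theta, xi))       # +0.5 or -0.5
    h = 1.0 / (1.0 + math.exp(-z))                  # ~0.6225 or ~0.3775
    pred = 1 if h >= 0.5 else 0
    print(xi, yi, round(h, 9), pred, "OK" if pred == yi else "MISPREDICTED")
```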