Commit 82a7da7

Pushing the docs to dev/ for branch: master, commit 2dc226154769841d07601c19b089086f4aa31cf9
1 parent 3d282dc commit 82a7da7

File tree: 1,075 files changed (+3615, -3772 lines changed)
-2.73 KB, binary file not shown.
-2.65 KB, binary file not shown.
Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
+{
+  "cells": [
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "collapsed": false
+      },
+      "outputs": [],
+      "source": [
+        "%matplotlib inline"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "\n# Visualizing the stock market structure\n\n\nThis example employs several unsupervised learning techniques to extract\nthe stock market structure from variations in historical quotes.\n\nThe quantity that we use is the daily variation in quote price: quotes\nthat are linked tend to co-fluctuate during a day.\n\n\nLearning a graph structure\n--------------------------\n\nWe use sparse inverse covariance estimation to find which quotes are\ncorrelated conditionally on the others. Specifically, sparse inverse\ncovariance gives us a graph, that is, a list of connections. For each\nsymbol, the symbols that it is connected to are those useful to explain\nits fluctuations.\n\nClustering\n----------\n\nWe use clustering to group together quotes that behave similarly. Here,\namongst the `various clustering techniques <clustering>` available\nin scikit-learn, we use `affinity_propagation` as it does\nnot enforce equal-size clusters, and it can automatically choose the\nnumber of clusters from the data.\n\nNote that this gives us a different indication than the graph, as the\ngraph reflects conditional relations between variables, while the\nclustering reflects marginal properties: variables clustered together can\nbe considered as having a similar impact at the level of the full stock\nmarket.\n\nEmbedding in 2D space\n---------------------\n\nFor visualization purposes, we need to lay out the different symbols on a\n2D canvas. For this we use `manifold` techniques to retrieve a 2D\nembedding.\n\n\nVisualization\n-------------\n\nThe outputs of the 3 models are combined in a 2D graph where nodes\nrepresent the stocks and edges the links between them:\n\n- cluster labels are used to define the color of the nodes\n- the sparse covariance model is used to display the strength of the edges\n- the 2D embedding is used to position the nodes in the plane\n\nThis example has a fair amount of visualization-related code, as\nvisualization is crucial here to display the graph. One of the challenges\nis to position the labels while minimizing overlap. For this we use a\nheuristic based on the direction of the nearest neighbor along each\naxis.\n\n"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "collapsed": false
+      },
+      "outputs": [],
+      "source": [
+        "from __future__ import print_function\n\n# Author: Gael Varoquaux gael.varoquaux@normalesup.org\n# License: BSD 3 clause\n\nimport sys\nfrom datetime import datetime\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.collections import LineCollection\n\nimport pandas as pd\n\nfrom sklearn import cluster, covariance, manifold\n\nprint(__doc__)\n\n\n# #############################################################################\n# Retrieve the data from Internet\n\n# The data is from 2003 - 2008. This is a reasonably calm period (not too\n# long ago, so that we get high-tech firms, and before the 2008 crash). This\n# kind of historical data can be obtained from APIs like the quandl.com and\n# alphavantage.co ones.\nstart_date = datetime(2003, 1, 1).date()\nend_date = datetime(2008, 1, 1).date()\n\nsymbol_dict = {\n    'TOT': 'Total',\n    'XOM': 'Exxon',\n    'CVX': 'Chevron',\n    'COP': 'ConocoPhillips',\n    'VLO': 'Valero Energy',\n    'MSFT': 'Microsoft',\n    'IBM': 'IBM',\n    'TWX': 'Time Warner',\n    'CMCSA': 'Comcast',\n    'CVC': 'Cablevision',\n    'YHOO': 'Yahoo',\n    'DELL': 'Dell',\n    'HPQ': 'HP',\n    'AMZN': 'Amazon',\n    'TM': 'Toyota',\n    'CAJ': 'Canon',\n    'SNE': 'Sony',\n    'F': 'Ford',\n    'HMC': 'Honda',\n    'NAV': 'Navistar',\n    'NOC': 'Northrop Grumman',\n    'BA': 'Boeing',\n    'KO': 'Coca Cola',\n    'MMM': '3M',\n    'MCD': 'McDonald\'s',\n    'PEP': 'Pepsi',\n    'K': 'Kellogg',\n    'UN': 'Unilever',\n    'MAR': 'Marriott',\n    'PG': 'Procter Gamble',\n    'CL': 'Colgate-Palmolive',\n    'GE': 'General Electrics',\n    'WFC': 'Wells Fargo',\n    'JPM': 'JPMorgan Chase',\n    'AIG': 'AIG',\n    'AXP': 'American express',\n    'BAC': 'Bank of America',\n    'GS': 'Goldman Sachs',\n    'AAPL': 'Apple',\n    'SAP': 'SAP',\n    'CSCO': 'Cisco',\n    'TXN': 'Texas Instruments',\n    'XRX': 'Xerox',\n    'WMT': 'Wal-Mart',\n    'HD': 'Home Depot',\n    'GSK': 'GlaxoSmithKline',\n    'PFE': 'Pfizer',\n    'SNY': 'Sanofi-Aventis',\n    'NVS': 'Novartis',\n    'KMB': 'Kimberly-Clark',\n    'R': 'Ryder',\n    'GD': 'General Dynamics',\n    'RTN': 'Raytheon',\n    'CVS': 'CVS',\n    'CAT': 'Caterpillar',\n    'DD': 'DuPont de Nemours'}\n\n\nsymbols, names = np.array(sorted(symbol_dict.items())).T\n\nquotes = []\n\nfor symbol in symbols:\n    print('Fetching quote history for %r' % symbol, file=sys.stderr)\n    url = ('https://raw.githubusercontent.com/scikit-learn/examples-data/'\n           'master/financial-data/{}.csv')\n    quotes.append(pd.read_csv(url.format(symbol)))\n\nclose_prices = np.vstack([q['close'] for q in quotes])\nopen_prices = np.vstack([q['open'] for q in quotes])\n\n# The daily variations of the quotes are what carry most information\nvariation = close_prices - open_prices\n\n\n# #############################################################################\n# Learn a graphical structure from the correlations\nedge_model = covariance.GraphicalLassoCV()\n\n# standardize the time series: using correlations rather than covariance\n# is more efficient for structure recovery\nX = variation.copy().T\nX /= X.std(axis=0)\nedge_model.fit(X)\n\n# #############################################################################\n# Cluster using affinity propagation\n\n_, labels = cluster.affinity_propagation(edge_model.covariance_)\nn_labels = labels.max()\n\nfor i in range(n_labels + 1):\n    print('Cluster %i: %s' % ((i + 1), ', '.join(names[labels == i])))\n\n# #############################################################################\n# Find a low-dimension embedding for visualization: find the best position of\n# the nodes (the stocks) on a 2D plane\n\n# We use a dense eigen_solver to achieve reproducibility (arpack is\n# initiated with random vectors that we don't control). In addition, we\n# use a large number of neighbors to capture the large-scale structure.\nnode_position_model = manifold.LocallyLinearEmbedding(\n    n_components=2, eigen_solver='dense', n_neighbors=6)\n\nembedding = node_position_model.fit_transform(X.T).T\n\n# #############################################################################\n# Visualization\nplt.figure(1, facecolor='w', figsize=(10, 8))\nplt.clf()\nax = plt.axes([0., 0., 1., 1.])\nplt.axis('off')\n\n# Display a graph of the partial correlations\npartial_correlations = edge_model.precision_.copy()\nd = 1 / np.sqrt(np.diag(partial_correlations))\npartial_correlations *= d\npartial_correlations *= d[:, np.newaxis]\nnon_zero = (np.abs(np.triu(partial_correlations, k=1)) > 0.02)\n\n# Plot the nodes using the coordinates of our embedding\nplt.scatter(embedding[0], embedding[1], s=100 * d ** 2, c=labels,\n            cmap=plt.cm.nipy_spectral)\n\n# Plot the edges\nstart_idx, end_idx = np.where(non_zero)\n# a sequence of (*line0*, *line1*, *line2*), where::\n#            linen = (x0, y0), (x1, y1), ... (xm, ym)\nsegments = [[embedding[:, start], embedding[:, stop]]\n            for start, stop in zip(start_idx, end_idx)]\nvalues = np.abs(partial_correlations[non_zero])\nlc = LineCollection(segments,\n                    zorder=0, cmap=plt.cm.hot_r,\n                    norm=plt.Normalize(0, .7 * values.max()))\nlc.set_array(values)\nlc.set_linewidths(15 * values)\nax.add_collection(lc)\n\n# Add a label to each node. The challenge here is that we want to\n# position the labels to avoid overlap with other labels\nfor index, (name, label, (x, y)) in enumerate(\n        zip(names, labels, embedding.T)):\n\n    dx = x - embedding[0]\n    dx[index] = 1\n    dy = y - embedding[1]\n    dy[index] = 1\n    this_dx = dx[np.argmin(np.abs(dy))]\n    this_dy = dy[np.argmin(np.abs(dx))]\n    if this_dx > 0:\n        horizontalalignment = 'left'\n        x = x + .002\n    else:\n        horizontalalignment = 'right'\n        x = x - .002\n    if this_dy > 0:\n        verticalalignment = 'bottom'\n        y = y + .002\n    else:\n        verticalalignment = 'top'\n        y = y - .002\n    plt.text(x, y, name, size=10,\n             horizontalalignment=horizontalalignment,\n             verticalalignment=verticalalignment,\n             bbox=dict(facecolor='w',\n                       edgecolor=plt.cm.nipy_spectral(label / float(n_labels)),\n                       alpha=.6))\n\nplt.xlim(embedding[0].min() - .15 * embedding[0].ptp(),\n         embedding[0].max() + .10 * embedding[0].ptp(),)\nplt.ylim(embedding[1].min() - .03 * embedding[1].ptp(),\n         embedding[1].max() + .03 * embedding[1].ptp())\n\nplt.show()"
+      ]
+    }
+  ],
+  "metadata": {
+    "kernelspec": {
+      "display_name": "Python 3",
+      "language": "python",
+      "name": "python3"
+    },
+    "language_info": {
+      "codemirror_mode": {
+        "name": "ipython",
+        "version": 3
+      },
+      "file_extension": ".py",
+      "mimetype": "text/x-python",
+      "name": "python",
+      "nbconvert_exporter": "python",
+      "pygments_lexer": "ipython3",
+      "version": "3.6.5"
+    }
+  },
+  "nbformat": 4,
+  "nbformat_minor": 0
+}
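
The notebook above chains three estimation steps: sparse inverse covariance for the conditional-dependence graph, affinity propagation for marginal clusters, and locally linear embedding for the 2D layout. Below is a minimal, self-contained sketch of those steps on synthetic data, so they can be tried without fetching quotes. The two-group data construction, shapes, and seed are invented for illustration; the estimator names, the standardization, and the 0.02 edge threshold come from the notebook itself.

import numpy as np
from sklearn import cluster, covariance, manifold

rng = np.random.RandomState(42)
n_days, n_symbols = 60, 10

# Synthetic "daily variations": two hidden factors each drive five symbols
base = rng.randn(n_days, 2)
X = np.hstack([base[:, [0]] + .3 * rng.randn(n_days, 5),
               base[:, [1]] + .3 * rng.randn(n_days, 5)])
X /= X.std(axis=0)  # standardize, as in the example

# 1) Sparse inverse covariance: non-zero entries of the precision matrix
#    are the edges of the conditional-dependence graph
edge_model = covariance.GraphicalLassoCV()
edge_model.fit(X)
non_zero = np.abs(np.triu(edge_model.precision_, k=1)) > 0.02
print('edges found:', non_zero.sum())

# 2) Affinity propagation on the estimated covariance: marginal clusters,
#    with the number of clusters chosen from the data
_, labels = cluster.affinity_propagation(edge_model.covariance_)
print('clusters:', labels.max() + 1)

# 3) 2D embedding of the symbols (rows of X.T) for the plot layout
node_position_model = manifold.LocallyLinearEmbedding(
    n_components=2, eigen_solver='dense', n_neighbors=6)
embedding = node_position_model.fit_transform(X.T).T
print('embedding shape:', embedding.shape)  # (2, n_symbols)

With the two-group construction the graph edges and the clusters should largely recover the two groups, which is the same structure-then-cluster reading the example applies to real quotes.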

dev/_downloads/skip_stock_market.py renamed to dev/_downloads/plot_stock_market.py

Lines changed: 66 additions & 134 deletions
@@ -70,160 +70,92 @@
 import numpy as np
 import matplotlib.pyplot as plt
 from matplotlib.collections import LineCollection
-from six.moves.urllib.request import urlopen
-from six.moves.urllib.parse import urlencode
-from sklearn import cluster, covariance, manifold
 
-print(__doc__)
+import pandas as pd
 
+from sklearn import cluster, covariance, manifold
 
-def retry(f, n_attempts=3):
-    "Wrapper function to retry function calls in case of exceptions"
-    def wrapper(*args, **kwargs):
-        for i in range(n_attempts):
-            try:
-                return f(*args, **kwargs)
-            except Exception:
-                if i == n_attempts - 1:
-                    raise
-    return wrapper
-
-
-def quotes_historical_google(symbol, start_date, end_date):
-    """Get the historical data from Google finance.
-
-    Parameters
-    ----------
-    symbol : str
-        Ticker symbol to query for, for example ``"DELL"``.
-    start_date : datetime.datetime
-        Start date.
-    end_date : datetime.datetime
-        End date.
-
-    Returns
-    -------
-    X : array
-        The columns are ``date`` -- date, ``open``, ``high``,
-        ``low``, ``close`` and ``volume`` of type float.
-    """
-    params = {
-        'q': symbol,
-        'startdate': start_date.strftime('%Y-%m-%d'),
-        'enddate': end_date.strftime('%Y-%m-%d'),
-        'output': 'csv',
-    }
-    url = 'https://finance.google.com/finance/historical?' + urlencode(params)
-    response = urlopen(url)
-    dtype = {
-        'names': ['date', 'open', 'high', 'low', 'close', 'volume'],
-        'formats': ['object', 'f4', 'f4', 'f4', 'f4', 'f4']
-    }
-    converters = {
-        0: lambda s: datetime.strptime(s.decode(), '%d-%b-%y').date()}
-    data = np.genfromtxt(response, delimiter=',', skip_header=1,
-                         dtype=dtype, converters=converters,
-                         missing_values='-', filling_values=-1)
-    min_date = min(data['date']) if len(data) else datetime.min.date()
-    max_date = max(data['date']) if len(data) else datetime.max.date()
-    start_end_diff = (end_date - start_date).days
-    min_max_diff = (max_date - min_date).days
-    data_is_fine = (
-        start_date <= min_date <= end_date and
-        start_date <= max_date <= end_date and
-        start_end_diff - 7 <= min_max_diff <= start_end_diff)
-
-    if not data_is_fine:
-        message = (
-            'Data looks wrong for symbol {}, url {}\n'
-            ' - start_date: {}, end_date: {}\n'
-            ' - min_date: {}, max_date: {}\n'
-            ' - start_end_diff: {}, min_max_diff: {}'.format(
-                symbol, url,
-                start_date, end_date,
-                min_date, max_date,
-                start_end_diff, min_max_diff))
-        raise RuntimeError(message)
-    return data
+print(__doc__)
 
 
 # #############################################################################
 # Retrieve the data from Internet
 
-# Choose a time period reasonably calm (not too long ago so that we get
-# high-tech firms, and before the 2008 crash)
+# The data is from 2003 - 2008. This is a reasonably calm period (not too
+# long ago, so that we get high-tech firms, and before the 2008 crash). This
+# kind of historical data can be obtained from APIs like the quandl.com and
+# alphavantage.co ones.
 start_date = datetime(2003, 1, 1).date()
 end_date = datetime(2008, 1, 1).date()
 
 symbol_dict = {
-    'NYSE:TOT': 'Total',
-    'NYSE:XOM': 'Exxon',
-    'NYSE:CVX': 'Chevron',
-    'NYSE:COP': 'ConocoPhillips',
-    'NYSE:VLO': 'Valero Energy',
-    'NASDAQ:MSFT': 'Microsoft',
-    'NYSE:IBM': 'IBM',
-    'NYSE:TWX': 'Time Warner',
-    'NASDAQ:CMCSA': 'Comcast',
-    'NYSE:CVC': 'Cablevision',
-    'NASDAQ:YHOO': 'Yahoo',
-    'NASDAQ:DELL': 'Dell',
-    'NYSE:HPQ': 'HP',
-    'NASDAQ:AMZN': 'Amazon',
-    'NYSE:TM': 'Toyota',
-    'NYSE:CAJ': 'Canon',
-    'NYSE:SNE': 'Sony',
-    'NYSE:F': 'Ford',
-    'NYSE:HMC': 'Honda',
-    'NYSE:NAV': 'Navistar',
-    'NYSE:NOC': 'Northrop Grumman',
-    'NYSE:BA': 'Boeing',
-    'NYSE:KO': 'Coca Cola',
-    'NYSE:MMM': '3M',
-    'NYSE:MCD': 'McDonald\'s',
-    'NYSE:PEP': 'Pepsi',
-    'NYSE:K': 'Kellogg',
-    'NYSE:UN': 'Unilever',
-    'NASDAQ:MAR': 'Marriott',
-    'NYSE:PG': 'Procter Gamble',
-    'NYSE:CL': 'Colgate-Palmolive',
-    'NYSE:GE': 'General Electrics',
-    'NYSE:WFC': 'Wells Fargo',
-    'NYSE:JPM': 'JPMorgan Chase',
-    'NYSE:AIG': 'AIG',
-    'NYSE:AXP': 'American express',
-    'NYSE:BAC': 'Bank of America',
-    'NYSE:GS': 'Goldman Sachs',
-    'NASDAQ:AAPL': 'Apple',
-    'NYSE:SAP': 'SAP',
-    'NASDAQ:CSCO': 'Cisco',
-    'NASDAQ:TXN': 'Texas Instruments',
-    'NYSE:XRX': 'Xerox',
-    'NYSE:WMT': 'Wal-Mart',
-    'NYSE:HD': 'Home Depot',
-    'NYSE:GSK': 'GlaxoSmithKline',
-    'NYSE:PFE': 'Pfizer',
-    'NYSE:SNY': 'Sanofi-Aventis',
-    'NYSE:NVS': 'Novartis',
-    'NYSE:KMB': 'Kimberly-Clark',
-    'NYSE:R': 'Ryder',
-    'NYSE:GD': 'General Dynamics',
-    'NYSE:RTN': 'Raytheon',
-    'NYSE:CVS': 'CVS',
-    'NYSE:CAT': 'Caterpillar',
-    'NYSE:DD': 'DuPont de Nemours'}
+    'TOT': 'Total',
+    'XOM': 'Exxon',
+    'CVX': 'Chevron',
+    'COP': 'ConocoPhillips',
+    'VLO': 'Valero Energy',
+    'MSFT': 'Microsoft',
+    'IBM': 'IBM',
+    'TWX': 'Time Warner',
+    'CMCSA': 'Comcast',
+    'CVC': 'Cablevision',
+    'YHOO': 'Yahoo',
+    'DELL': 'Dell',
+    'HPQ': 'HP',
+    'AMZN': 'Amazon',
+    'TM': 'Toyota',
+    'CAJ': 'Canon',
+    'SNE': 'Sony',
+    'F': 'Ford',
+    'HMC': 'Honda',
+    'NAV': 'Navistar',
+    'NOC': 'Northrop Grumman',
+    'BA': 'Boeing',
+    'KO': 'Coca Cola',
+    'MMM': '3M',
+    'MCD': 'McDonald\'s',
+    'PEP': 'Pepsi',
+    'K': 'Kellogg',
+    'UN': 'Unilever',
+    'MAR': 'Marriott',
+    'PG': 'Procter Gamble',
+    'CL': 'Colgate-Palmolive',
+    'GE': 'General Electrics',
+    'WFC': 'Wells Fargo',
+    'JPM': 'JPMorgan Chase',
+    'AIG': 'AIG',
+    'AXP': 'American express',
+    'BAC': 'Bank of America',
+    'GS': 'Goldman Sachs',
+    'AAPL': 'Apple',
+    'SAP': 'SAP',
+    'CSCO': 'Cisco',
+    'TXN': 'Texas Instruments',
+    'XRX': 'Xerox',
+    'WMT': 'Wal-Mart',
+    'HD': 'Home Depot',
+    'GSK': 'GlaxoSmithKline',
+    'PFE': 'Pfizer',
+    'SNY': 'Sanofi-Aventis',
+    'NVS': 'Novartis',
+    'KMB': 'Kimberly-Clark',
+    'R': 'Ryder',
+    'GD': 'General Dynamics',
+    'RTN': 'Raytheon',
+    'CVS': 'CVS',
+    'CAT': 'Caterpillar',
+    'DD': 'DuPont de Nemours'}
 
 
 symbols, names = np.array(sorted(symbol_dict.items())).T
 
-# retry is used because quotes_historical_google can temporarily fail
-# for various reasons (e.g. empty result from Google API).
 quotes = []
 
 for symbol in symbols:
     print('Fetching quote history for %r' % symbol, file=sys.stderr)
-    quotes.append(retry(quotes_historical_google)(
-        symbol, start_date, end_date))
+    url = ('https://raw.githubusercontent.com/scikit-learn/examples-data/'
+           'master/financial-data/{}.csv')
+    quotes.append(pd.read_csv(url.format(symbol)))
 
 close_prices = np.vstack([q['close'] for q in quotes])
 open_prices = np.vstack([q['open'] for q in quotes])
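
The point of this rename is the data path: the example now reads pre-fetched CSV files from the scikit-learn examples-data repository instead of scraping and validating Google Finance responses, which removes the retry wrapper and the whole quotes_historical_google helper. A minimal sketch of the new path for a single symbol; the URL template and the 'open'/'close' column names are the ones the diff itself uses, while the choice of 'AAPL' is an arbitrary pick from symbol_dict.

import pandas as pd

# URL template copied from the diff above; one CSV per ticker symbol
url = ('https://raw.githubusercontent.com/scikit-learn/examples-data/'
       'master/financial-data/{}.csv')
quotes = pd.read_csv(url.format('AAPL'))

# Daily variation, the quantity the example actually models
variation = quotes['close'] - quotes['open']
print(variation.head())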

dev/_downloads/scikit-learn-docs.pdf

-45.4 KB, binary file not shown.
