Merge pull request #93 from flask-dashboard/refactor
Refactor
bogdanp05 authored Apr 27, 2018
2 parents aec3370 + 2a2e264 commit 0ce6ac8
Showing 132 changed files with 3,010 additions and 49,717 deletions.
8 changes: 7 additions & 1 deletion .travis.yml
@@ -6,5 +6,11 @@ python:
- "3.5"
- "3.6"

install:
- pip install codecov

script:
- python setup.py test
- coverage run setup.py test

after_success:
- codecov
12 changes: 12 additions & 0 deletions CHANGELOG.rst
@@ -7,7 +7,19 @@ Please note that the changes before version 1.10.0 have not been documented.

Unreleased
----------
Changed

- Removed two graphs: hits per hour and execution time per hour

- New template

- Refactored code

Fixed issues:
- #63
- #80
- #89
-

v1.11.0
-------
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -1,5 +1,6 @@
recursive-include flask_monitoringdashboard/static *
recursive-include flask_monitoringdashboard/templates *
recursive-include flask_monitoringdashboard/test *
include requirements.txt
include README.md
include CHANGELOG.rst
10 changes: 6 additions & 4 deletions README.md
@@ -1,4 +1,10 @@
# Flask Monitoring Dashboard
[![Build Status](https://travis-ci.org/flask-dashboard/Flask-MonitoringDashboard.svg?branch=master)](https://travis-ci.org/flask-dashboard/Flask-MonitoringDashboard)
[![Documentation Status](https://readthedocs.org/projects/flask-monitoringdashboard/badge/?version=latest)](http://flask-monitoringdashboard.readthedocs.io/en/latest/?badge=latest)
[![codecov](https://codecov.io/gh/flask-dashboard/Flask-MonitoringDashboard/branch/master/graph/badge.svg)](https://codecov.io/gh/flask-dashboard/Flask-MonitoringDashboard)
[![PyPI version](https://badge.fury.io/py/Flask-MonitoringDashboard.svg)](https://badge.fury.io/py/Flask-MonitoringDashboard)
[![Py-version](https://img.shields.io/pypi/pyversions/flask_monitoringdashboard.svg)](https://img.shields.io/pypi/pyversions/flask_monitoringdashboard.svg)

Dashboard for automatic monitoring of Flask web-services.

The Flask Monitoring Dashboard is an extension that offers four main functionalities with little effort from the Flask developer:
@@ -19,10 +25,6 @@ You can view the results by default using the default endpoint (this can be conf

For a more advanced documentation, take a look at the information on [this site](http://flask-monitoringdashboard.readthedocs.io/en/latest/functionality.html).

### Status
[![Build Status](https://travis-ci.org/flask-dashboard/Flask-MonitoringDashboard.svg?branch=master)](https://travis-ci.org/flask-dashboard/Flask-MonitoringDashboard.svg?branch=master)
[![Documentation Status](https://readthedocs.org/projects/flask-monitoringdashboard/badge/?version=latest)](http://flask-monitoringdashboard.readthedocs.io/en/latest/?badge=latest)

## Installation
To install from source, download the source code, then run this:

1 change: 0 additions & 1 deletion TODO.rst
@@ -21,7 +21,6 @@ Features to be implemented
- Page '/result/<endpoint>/time_per_version' - Max 10 versions per plot.
- Page '/result/<endpoint>/outliers' - Max 20 results per table.

[ ] Refactor all measurement-endpoints in a Blueprint

Work in progress
----------------
12 changes: 1 addition & 11 deletions docs/configuration.rst
@@ -62,8 +62,6 @@ the entry point of the app. The following things can be configured:
OUTLIER_DETECTION_CONSTANT=2.5
DASHBOARD_ENABLED = True
TEST_DIR=/<path to your project>/tests/
N=5
SUBMIT_RESULTS_URL=http://0.0.0.0:5000/dashboard/submit-test-results
COLORS={'main':[0,97,255], 'static':[255,153,0]}
This might look a bit overwhelming, but the following list explains everything in detail:
@@ -95,15 +93,7 @@ This might look a bit overwhelming, but the following list explains everything in detail:
the expected overhead is a bit larger, as you can find
`here <https://github.com/flask-dashboard/Testing-Dashboard-Overhead>`_.

- **TEST_DIR**, **N**, **SUBMIT_RESULTS_URL:**
To enable Travis to run your unit tests and send the results to the dashboard, you have to set those values:

- **TEST_DIR** specifies where the unit tests reside.

- **SUBMIT_RESULTS_URL** specifies where Travis should upload the test results to. When left out, the results will
not be sent anywhere, but the performance collection process will still run.

- **N** specifies the number of times Travis should run each unit test.
- **TEST_DIR:** Specifies where the unit tests reside. This value is shown on the configuration page of the Dashboard.

- **COLORS:** Each endpoint is automatically assigned a color based on a hash of its name.
However, if you want to specify a different color for an endpoint, you can set this variable.
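
To make the options above concrete, here is a small self-contained sketch (all paths and values are placeholders): it writes a `config.cfg` shaped like the example at the top of this file, under the `[dashboard]` section name that the old `collect_performance.py` further down in this diff reads from, and reads it back with `configparser`:

.. code-block:: python

    # Sketch only: write a placeholder config.cfg and read it back with
    # configparser, the same module the old collect_performance.py uses.
    import configparser

    with open('config.cfg', 'w') as f:
        f.write('[dashboard]\n'
                'OUTLIER_DETECTION_CONSTANT=2.5\n'
                'DASHBOARD_ENABLED=True\n'
                'TEST_DIR=/path/to/your/project/tests/\n'
                "COLORS={'main':[0,97,255], 'static':[255,153,0]}\n")

    parser = configparser.RawConfigParser()
    parser.read('config.cfg')
    print(parser.get('dashboard', 'TEST_DIR'))
    print(parser.get('dashboard', 'COLORS'))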
28 changes: 9 additions & 19 deletions docs/functionality.rst
@@ -93,38 +93,28 @@ Using the collected data, a number of observations can be made:

Test-Coverage Monitoring
------------------------
To enable Travis to run your unit tests and send the results to the dashboard, four steps have to be taken:
To enable Travis to run your unit tests and send the results to the dashboard, three steps have to be taken:

1. Update the config file ('config.cfg') to include three additional values, `TEST_DIR`, `SUBMIT_RESULTS_URL` and `N`.

- **TEST_DIR** specifies where the unit tests reside.

- **SUBMIT_RESULTS_URL** specifies where Travis should upload the test results to. When left out, the results will
not be sent anywhere, but the performance collection process will still run.

- **N** specifies the number of times Travis should run each unit test.

2. The installation requirement for the dashboard has to be added to the `setup.py` file of your app:
1. The installation requirement for the dashboard has to be added to the `setup.py` file of your app:

.. code-block:: python
dependency_links=["https://github.com/flask-dashboard/Flask-MonitoringDashboard/tarball/master#egg=flask_monitoringdashboard"]
install_requires=('flask_monitoringdashboard')
3. In your `.travis.yml` file, three script commands should be added:
2. In your `.travis.yml` file, one script command should be added:

.. code-block:: bash
export DASHBOARD_CONFIG=./config.cfg
export DASHBOARD_LOG_DIR=./logs/
python -m flask_monitoringdashboard.collect_performance
python -m flask_monitoringdashboard.collect_performance --test_folder=./tests --times=5 --url=https://yourdomain.org/dashboard
The config environment variable specifies where the performance collection process can find the config file.
The log directory environment variable specifies where the performance collection process should place the logs it uses.
The third command will start the actual performance collection process.
The `test_folder` argument specifies where the performance collection process can find the unit tests to use.
The `times` argument (optional, default: 5) specifies how many times to run each of the unit tests.
The `url` argument (optional) specifies the URL of the dashboard that should receive the performance results.
When the last argument is omitted, the performance testing will run, but without publishing the results.

4. A method that is executed after every request should be added to the blueprint of your app.
3. A method that is executed after every request should be added to the blueprint of your app.
This is done by the dashboard automatically when the blueprint is passed to the binding function like so:

.. code-block:: python
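
As a usage sketch of this binding step: the `bind(app, blue_print=None)` signature comes from the `__init__.py` diff below, while the `dashboard` import alias and all other names here are illustrative assumptions:

.. code-block:: python

    # Minimal sketch: pass your own blueprint to bind() so the dashboard can
    # attach its after_request hook to it and log endpoint hits.
    from flask import Blueprint, Flask
    import flask_monitoringdashboard as dashboard  # assumed import alias

    app = Flask(__name__)
    blueprint = Blueprint('api', __name__)  # illustrative blueprint

    @blueprint.route('/ping')
    def ping():
        return 'pong'

    app.register_blueprint(blueprint)
    dashboard.bind(app, blue_print=blueprint)

    if __name__ == '__main__':
        app.run()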
18 changes: 9 additions & 9 deletions flask_monitoringdashboard/__init__.py
@@ -13,8 +13,10 @@
"""

import os

from flask import Blueprint
from flask_monitoringdashboard.config import Config

from flask_monitoringdashboard.core.config import Config

config = Config()
user_app = None
@@ -49,22 +51,20 @@ def bind(app, blue_print=None):
import os
import datetime
from flask import request
log_dir = os.getenv('DASHBOARD_LOG_DIR')

@blue_print.after_request
def after_request(response):
    if log_dir:
        t1 = str(datetime.datetime.now())
        log = open(log_dir + "endpoint_hits.log", "a")
        log.write("\"{}\",\"{}\"\n".format(t1, request.endpoint))
        log.close()
    hit_time_stamp = str(datetime.datetime.now())
    log = open("endpoint_hits.log", "a")
    log.write('"{}","{}"\n'.format(hit_time_stamp, request.endpoint))
    log.close()
    return response

# Add all route-functions to the blueprint
import flask_monitoringdashboard.routings
import flask_monitoringdashboard.views

# Add wrappers to the endpoints that have to be monitored
from flask_monitoringdashboard.measurement import init_measurement
from flask_monitoringdashboard.core.measurement import init_measurement
blueprint.before_app_first_request(init_measurement)

# register the blueprint to the app
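
With a blueprint passed in, every request it handles appends one `"time","endpoint"` row to `endpoint_hits.log` (the refactored hook writes to the working directory rather than to `DASHBOARD_LOG_DIR`). A small sketch, with an invented timestamp, of that row format and of parsing it back the way `collect_performance.py` does:

    # Sketch with invented values: the row format written by after_request above,
    # parsed back with csv.DictReader exactly as collect_performance.py does.
    import csv
    import datetime
    import io

    timestamp = '2018-04-27 12:00:00.000000'             # placeholder timestamp
    row = '"{}","{}"\n'.format(timestamp, 'main_page')    # what the hook writes
    log = io.StringIO('"time","endpoint"\n' + row)        # header + one hit

    for parsed in csv.DictReader(log):
        hit_time = datetime.datetime.strptime(parsed['time'], '%Y-%m-%d %H:%M:%S.%f')
        print(hit_time, parsed['endpoint'])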
150 changes: 68 additions & 82 deletions flask_monitoringdashboard/collect_performance.py
@@ -1,103 +1,89 @@
import requests
import configparser
import time
import datetime
import os
import sys
import argparse
import csv
import datetime
import time
from unittest import TestLoader

# Abort if config file is not specified.
config = os.getenv('DASHBOARD_CONFIG')
if config is None:
    print('You must specify a config file for the dashboard to be able to use the unit test monitoring functionality.')
    print('Please set an environment variable \'DASHBOARD_CONFIG\' specifying the absolute path to your config file.')
    sys.exit(0)

# Abort if log directory is not specified.
log_dir = os.getenv('DASHBOARD_LOG_DIR')
if log_dir is None:
    print('You must specify a log directory for the dashboard to be able to use the unit test monitoring '
          'functionality.')
    print('Please set an environment variable \'DASHBOARD_LOG_DIR\' specifying the absolute path where you want the '
          'log files to be placed.')
    sys.exit(0)
import requests

n = 1
url = None
sys.path.insert(0, os.getcwd())
parser = configparser.RawConfigParser()
try:
    parser.read(config)
    if parser.has_option('dashboard', 'N'):
        n = int(parser.get('dashboard', 'N'))
    if parser.has_option('dashboard', 'TEST_DIR'):
        test_dir = parser.get('dashboard', 'TEST_DIR')
    else:
        print('No test directory specified in your config file. Please do so.')
        sys.exit(0)
    if parser.has_option('dashboard', 'SUBMIT_RESULTS_URL'):
        url = parser.get('dashboard', 'SUBMIT_RESULTS_URL')
    else:
        print('No url specified in your config file for submitting test results. Please do so.')
except configparser.Error as e:
    print("Something went wrong while parsing the configuration file:\n{}".format(e))
# Parsing the arguments.
parser = argparse.ArgumentParser(description='Collecting performance results from the unit tests of a project.')
parser.add_argument('--test_folder', dest='test_folder', required=True,
                    help='folder in which the unit tests can be found (example: ./tests)')
parser.add_argument('--times', dest='times', default=5,
                    help='number of times to execute every unit test (default: 5)')
parser.add_argument('--url', dest='url', default=None,
                    help='url of the Dashboard to submit the performance results to')
args = parser.parse_args()
print('Starting the collection of performance results with the following settings:')
print(' - folder containing unit tests: ', args.test_folder)
print(' - number of times to run tests: ', args.times)
print(' - url to submit the results to: ', args.url)
if not args.url:
    print('The performance results will not be submitted.')

# Initialize result dictionary and logs.
data = {'test_runs': [], 'grouped_tests': []}
log = open(log_dir + "endpoint_hits.log", "w")
log.write("\"time\",\"endpoint\"\n")
log = open('endpoint_hits.log', 'w')
log.write('"time","endpoint"\n')
log.close()
log = open(log_dir + "test_runs.log", "w")
log.write("\"start_time\",\"stop_time\",\"test_name\"\n")

if test_dir:
    suites = TestLoader().discover(test_dir, pattern="*test*.py")
    for i in range(n):
        for suite in suites:
            for case in suite:
                for test in case:
                    result = None
                    t1 = str(datetime.datetime.now())
                    time1 = time.time()
                    result = test.run(result)
                    time2 = time.time()
                    t2 = str(datetime.datetime.now())
                    log.write("\"{}\",\"{}\",\"{}\"\n".format(t1, t2, str(test)))
                    t = (time2 - time1) * 1000
                    data['test_runs'].append({'name': str(test), 'exec_time': t, 'time': str(datetime.datetime.now()),
                                              'successful': result.wasSuccessful(), 'iter': i + 1})
log = open('test_runs.log', 'w')
log.write('"start_time","stop_time","test_name"\n')

# Find the tests and execute them the specified number of times.
# Add the performance results to the result dictionary.
suites = TestLoader().discover(args.test_folder, pattern="*test*.py")
for iteration in range(args.times):
    for suite in suites:
        for case in suite:
            for test in case:
                test_result = None
                start_time_stamp = str(datetime.datetime.now())
                time_before = time.time()
                test_result = test.run(test_result)
                time_after = time.time()
                end_time_stamp = str(datetime.datetime.now())
                log.write('"{}","{}","{}"\n'.format(start_time_stamp, end_time_stamp, str(test)))
                execution_time = (time_after - time_before) * 1000
                data['test_runs'].append(
                    {'name': str(test), 'exec_time': execution_time, 'time': str(datetime.datetime.now()),
                     'successful': test_result.wasSuccessful(), 'iter': iteration + 1})
log.close()

# Read and parse the log containing the test runs
runs = []
with open(log_dir + 'test_runs.log') as log:
# Read and parse the log containing the test runs into an array for processing.
test_runs = []
with open('test_runs.log') as log:
    reader = csv.DictReader(log)
    for row in reader:
        runs.append([datetime.datetime.strptime(row["start_time"], "%Y-%m-%d %H:%M:%S.%f"),
                     datetime.datetime.strptime(row["stop_time"], "%Y-%m-%d %H:%M:%S.%f"),
                     row['test_name']])
        test_runs.append([datetime.datetime.strptime(row["start_time"], "%Y-%m-%d %H:%M:%S.%f"),
                          datetime.datetime.strptime(row["stop_time"], "%Y-%m-%d %H:%M:%S.%f"),
                          row['test_name']])

# Read and parse the log containing the endpoint hits
hits = []
with open(log_dir + 'endpoint_hits.log') as log:
# Read and parse the log containing the endpoint hits into an array for processing.
endpoint_hits = []
with open('endpoint_hits.log') as log:
    reader = csv.DictReader(log)
    for row in reader:
        hits.append([datetime.datetime.strptime(row["time"], "%Y-%m-%d %H:%M:%S.%f"),
                     row['endpoint']])
        endpoint_hits.append([datetime.datetime.strptime(row["time"], "%Y-%m-%d %H:%M:%S.%f"),
                              row['endpoint']])

# Analyze logs to find out which endpoints are hit by which unit tests
for h in hits:
    for r in runs:
        if r[0] <= h[0] <= r[1]:
            if {'endpoint': h[1], 'test_name': r[2]} not in data['grouped_tests']:
                data['grouped_tests'].append({'endpoint': h[1], 'test_name': r[2]})
# Analyze the two arrays to find out which endpoints were hit by which unit tests.
# Add the endpoint_name/test_name combination to the result dictionary.
for endpoint_hit in endpoint_hits:
    for test_run in test_runs:
        if test_run[0] <= endpoint_hit[0] <= test_run[1]:
            if {'endpoint': endpoint_hit[1], 'test_name': test_run[2]} not in data['grouped_tests']:
                data['grouped_tests'].append({'endpoint': endpoint_hit[1], 'test_name': test_run[2]})
            break

# Try to send test results and endpoint-grouped unit tests to the flask_monitoringdashboard
if url:
# Send test results and endpoint_name/test_name combinations to the Dashboard if specified.
if args.url:
    if args.url[-1] == '/':
        args.url += 'submit-test-results'
    else:
        args.url += '/submit-test-results'
    try:
        requests.post(url, json=data)
        print('Sent unit test results to the dashboard.')
        requests.post(args.url, json=data)
        print('Sent unit test results to the Dashboard at ', args.url)
    except Exception as e:
        print('Sending unit test results to the dashboard failed:\n{}'.format(e))
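
To illustrate the matching rule at the end of the script: a hit is attributed to the test run whose start/stop interval contains its time stamp, and duplicate endpoint/test combinations are skipped. A self-contained sketch with invented data:

    # Self-contained illustration (all data invented) of the grouping step above.
    from datetime import datetime

    test_runs = [
        (datetime(2018, 4, 27, 12, 0, 0), datetime(2018, 4, 27, 12, 0, 2), 'test_login'),
        (datetime(2018, 4, 27, 12, 0, 3), datetime(2018, 4, 27, 12, 0, 5), 'test_profile'),
    ]
    endpoint_hits = [
        (datetime(2018, 4, 27, 12, 0, 1), 'login'),
        (datetime(2018, 4, 27, 12, 0, 4), 'profile'),
    ]

    grouped_tests = []
    for hit_time, endpoint in endpoint_hits:
        for start, stop, test_name in test_runs:
            if start <= hit_time <= stop:
                combination = {'endpoint': endpoint, 'test_name': test_name}
                if combination not in grouped_tests:
                    grouped_tests.append(combination)
                break

    print(grouped_tests)
    # [{'endpoint': 'login', 'test_name': 'test_login'},
    #  {'endpoint': 'profile', 'test_name': 'test_profile'}]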
7 changes: 7 additions & 0 deletions flask_monitoringdashboard/core/__init__.py
@@ -0,0 +1,7 @@
"""
Core files for the Flask Monitoring Dashboard
- auth.py handles authentication
- forms.py are used for generating WTF_Forms
- measurements.py contains a number of wrappers
- outlier.py contains outlier information
"""