Commit 8775671: Documentation adjustment.
jairbj committed Oct 23, 2016 (1 parent b89e036)
Showing 2 changed files with 53 additions and 55 deletions.

README.md (38 additions, 40 deletions)
# HOOTSUITE Challenge #

## Challenge ##
[challenge.md](https://github.com/jairbj/WebhookChallenge/blob/master/challenge.md)


## What I did ##
I made a small project to simulate a webhook service that receives messages and forwards them to previously defined destinations.
It was developed in PHP/Symfony3.
The project was built on REST concepts and it handles GET, POST, DELETE, PUT and PATCH requests.
The endpoints were tested using PHPUnit.
All requests to the webhook service should be made in JSON format, and all responses are returned in JSON.
## Considerations about security ##
Due to the short time, I implemented neither request validation nor signing of the sent messages. I also didn't require HTTPS URLs, but that should be mandatory in a production environment.
In a production environment I'd use an RSA or ECC signature. I'd rather not use HMAC: since it uses symmetric keys, we'd need an extra step to ensure the keys are transferred securely to the other side of the endpoint.
Another option is authenticating requests to the webhook with credentials using JSON Web Tokens. The token can be transferred either "plain" over HTTPS or over an OAuth security layer.
The service does verify that a valid URL was given, and it won't send messages to URLs that resolve to a private address.
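The private-address check described above can be sketched as follows. This is an illustrative Python version, not the project's actual PHP code; the function name and the exact rejection policy are assumptions:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_destination(url: str) -> bool:
    """Reject URLs that are invalid or resolve to a private/loopback address."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every address the hostname maps to (A and AAAA records).
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # hostname does not resolve
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True
```

A production service would also repeat this check at delivery time, since DNS answers can change between registration and delivery (a DNS-rebinding attack).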

## Considerations about scalability ##
Since this project relies on a centralized MySQL database, the server can't handle millions of connections, but it was developed with scalability in mind.
When a message is posted to the webhook, it is added to a queue based on its destination.
The project has a worker module that consumes and processes these messages. You can run multiple workers, each one processing one queue (one destination); message ordering for each destination is then guaranteed. You can also run a single worker to process all queues.
Each worker can run on an independent server, but it needs to connect to the same database server.
As it's a small project (proof of concept only), I didn't add validation to ensure you start only one worker per queue. If you start more than one for the same queue, message ordering will not be guaranteed.
In other words, this project scales as soon as you have multiple destinations.
In a production environment, I'd probably use a queue server like RabbitMQ.
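The queue-per-destination design above can be illustrated with a minimal in-memory sketch. This is Python for illustration only; the project itself keeps its queues in MySQL, and the class and method names are invented here:

```python
from collections import defaultdict, deque

class MessageQueues:
    """One FIFO queue per destination; draining a queue preserves order."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def enqueue(self, destination_id, message):
        self.queues[destination_id].append(message)

    def process_queue(self, destination_id, deliver):
        """Drain one destination's queue in order. A worker owns exactly
        one queue, so per-destination ordering is guaranteed."""
        queue = self.queues[destination_id]
        while queue:
            message = queue[0]   # peek: keep the message until it is delivered
            deliver(destination_id, message)
            queue.popleft()      # remove only after successful delivery
```

Because each worker drains only its own queue, workers for different destinations can run in parallel on different servers without breaking per-destination ordering.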

## First ##
Download Composer and install the required dependencies with

    php composer.phar install

The server will start listening in `http://127.0.0.1:8000`

## PHPUnit ##
There are PHPUnit tests in the `./tests` folder.
You can run them with the command: `./vendor/bin/phpunit`
Attention: Running the tests will erase the database.

## Webhook Requisitions (documentation) ##
The webhook documentation is located in both [doc.md](https://github.com/jairbj/WebhookChallenge/blob/master/doc.md) and [doc.html](https://github.com/jairbj/WebhookChallenge/blob/master/doc.html).
After starting the server you can also read the documentation at `http://127.0.0.1:8000/doc`.
**All requests and responses should be made in JSON format.**

## Starting the message processor ##
The message processor can be started with the command

    php ./bin/console message-processor
The service will start, process the messages and exit.
### Options: ###

`--persistent`
If set, the service will run in persistent mode: it won't exit until it's cancelled.

`--destination=DESTINATION`
Indicates that the service should only process messages from the DESTINATION (destination id) queue. If this option isn't set, the service processes messages from all queues.

`--retry=RETRY`
Indicates how many times (RETRY) the service will retry delivering a message (in case of error) before removing it. Default = 3.

`--retry-delay=RETRY-DELAY`
Indicates how long, in seconds (RETRY-DELAY), the service should wait before retrying to deliver a message (in case of error). Default = 1.

**Attention:** The message processor will automatically remove messages that haven't been delivered for more than 24h.
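The retry behaviour of the options above can be sketched like this. This is illustrative Python, not the actual Symfony command; the parameter names and defaults mirror the flags described above, and the function itself is invented for illustration:

```python
import time
from datetime import datetime, timedelta

MAX_AGE = timedelta(hours=24)  # messages older than this are dropped outright

def process_message(message, deliver, retry=3, retry_delay=1.0):
    """Try to deliver a message, retrying `retry` times with `retry_delay`
    seconds between attempts; return True if delivered, False if dropped."""
    if datetime.utcnow() - message["created_at"] > MAX_AGE:
        return False  # undelivered for more than 24h: remove without retrying
    for attempt in range(retry + 1):
        try:
            deliver(message)
            return True
        except Exception:
            if attempt < retry:
                time.sleep(retry_delay)
    return False  # all retries failed: the message is removed
```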

## Extra information ##
When you run the Symfony built-in server, it automatically sets the environment to DEVELOPMENT, so in case of error it returns the full debug stack to the client. That doesn't happen if the environment is set to PRODUCTION.
As I didn't use a queue server, I decided to add GET and DELETE methods to the "messages" endpoint, so we can check and eventually remove messages from the queue. Messages remain in the queue only until they are processed.
If you remove a destination, all messages to that destination that haven't been processed yet are automatically removed as well.
I really love backend programming and I would really like to be part of the HootSuite team.
For me, programming isn't a job, it's a pleasure.
challenge.md (15 additions, 15 deletions)
# From HootSuite #

As more and more integration takes place between SaaS providers, other SaaS providers and their customers, webhooks have become an invaluable way of sharing events. These events simplify data-synchronization and extensibility.

**Project:**
Write a webhook calling service that will reliably POST data to destination URLs in the order POST message requests are received.

The service should support the following remote requests via REST:

* register a new destination (URL) returning its id
* list registered destinations [{id, URL},...]
* delete a destination by id
* POST a message to this destination (id, msg-body, content-type): this causes the server to POST the given msg-body to the URL associated with that id.

**Behaviour:**
* If the destination URL is not responding (e.g. the server is down) or returns a non-200 response, your service should resend the message at a later time
* Messages not sent within 24 hours can be deleted
* Messages that failed to send should be retried 3 or more times before they are deleted
* Message ordering to a destination should be preserved, even when there are pending message retries for that destination
* Feel free to add more metadata to the destination (id, URL,) if it helps your implementation

**To Consider:**
* is your API using the standard REST-ful conventions for the 4 operations?
* how can I scale out this service across multiple servers while preserving per-destination ordering?
* how well does your service support concurrency for multiple destinations while preserving per-destination ordering?
* how secure is this? should you require HTTPS urls? should the content be signed with something like an HMAC? Should any url be allowed (e.g. one that has or resolves to a private IP address?)
