From 663af692afe1428b4a95923624e2db11b2c46ccd Mon Sep 17 00:00:00 2001 From: Jason Hong <59253882+jhong00@users.noreply.github.com> Date: Mon, 25 May 2020 10:36:13 -0700 Subject: [PATCH 01/12] Create proposal.md --- .../Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md | 1 + 1 file changed, 1 insertion(+) create mode 100644 Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md diff --git a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md new file mode 100644 index 0000000..8b13789 --- /dev/null +++ b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md @@ -0,0 +1 @@ + From bc7a7e47a5d6af3149bdc83ededd4a80015c7ace Mon Sep 17 00:00:00 2001 From: Jason Hong <59253882+jhong00@users.noreply.github.com> Date: Mon, 25 May 2020 11:02:50 -0700 Subject: [PATCH 02/12] Create Web-Scraping-Current-Stock-Price --- Web-Scraping-Current-Stock-Price | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) create mode 100644 Web-Scraping-Current-Stock-Price diff --git a/Web-Scraping-Current-Stock-Price b/Web-Scraping-Current-Stock-Price new file mode 100644 index 0000000..8b5b030 --- /dev/null +++ b/Web-Scraping-Current-Stock-Price @@ -0,0 +1,32 @@ +# TEMPLATE + +## :fire: Do not edit this file - copy the template and create your own file. + +**[Step-By-Step Technical Blog Guide](https://hq.bitproject.org/how-to-write-a-technical-blog/)** + +### :pushpin: Step 1 +**TITLE:** +Web-Scraping-Current-Stock-Price + +**TOPIC:** +Machine Learning + +**DESCRIPTION (5-7+ sentences):** +The topic that I decided to focus on is web scraping of stock prices. This means that whenever a stock price changes, the program will have the functionality of outputting that information. 
The flexibility and simplicity of this program that I implemented is evident of how easy web scraping is to learn and how accessible it is to anyone that has a computer and an IDE. Anyone can learn how to web scrape and this is at its most basic form. Web scraping has been something that I've been wanting to learn for awhile now, so this was the perfect opportunity for me to learn and educate others on what I learned throughout this process. + +### :pushpin: Step 2 +:family: **TARGET AUDIENCE (3-5+ sentences):** +Beginning Programmers with minimal knowledge in coding + +### :pushpin: Step 3 +> Outline your learning/teaching structure: +My teaching structure is straight to the point and step by step. I don't want to bore those that don't want to read a full essay about a simple topic. Especially in this generation, people hate reading, so I think including images and getting straight to the point will be quickest and easiest to learn. + +**Beginning (2-3+ sentences):** +I will discuss about my motive for creating the blog. Out of all the ideas that I could've chosen, why did I choose this one? Unrelated to my motive, I will transition into talking about the modules necessary to be imported and installed on your operating system for this program to work. + +**Middle (2-3+ sentences):** +I will discuss about how to actually implement the program. Each code will be explained concisely. I have about 10-15 lines of code intended to be explained, so I don't want to include paragraphs about each line of code when some of it is self explanatory. + +**End (2-3+ sentences):** +Lastly, I will talk about how this topic relates to the real world and why it is relevant in the society that we live in today. I will also provide screenshots of what the output looks like and provide an ending statement to encourage others to pursue more web scraping! 
From 6e9bb5024213379092a982cb4fbb23826c456c20 Mon Sep 17 00:00:00 2001
From: Jason Hong <59253882+jhong00@users.noreply.github.com>
Date: Mon, 25 May 2020 11:22:48 -0700
Subject: [PATCH 03/12] Create blog.md

---
 .../Web-Scraping-Current-Stock-Price/blog.md  | 71 +++++++++++++++++++
 1 file changed, 71 insertions(+)
 create mode 100644 Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/blog.md

diff --git a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/blog.md b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/blog.md
new file mode 100644
index 0000000..f87988e
--- /dev/null
+++ b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/blog.md
@@ -0,0 +1,71 @@
+# Why Web Scraping?
+
+Everyone wants to make money in today's society. More and more college students want to become engineers, computer scientists, and doctors because those paths promise a stable income despite their difficulty. However, what if I told you that you do not have to take one of those paths to be successful? As an investor myself, I decided to learn about stocks at a very young age. When I first started, I tried to find the perfect brokerage, and one of the important things I noticed was that many brokerages were late in updating stock prices, which can eventually cost you money. Obviously, I don't want that to happen to you, so look no further! I've created a real time stock price scraper in Python that lets you keep up with the current price of the stock you're interested in.
+
+## Installation of Important Modules
+
+Before we begin, there are a few things we need to install. Navigate to your command line and run the following commands one at a time. Note that Beautiful Soup is published on PyPI as beautifulsoup4; the plain beautifulsoup package is the obsolete Python 2 release.
+
+```console
+pip install requests
+pip install beautifulsoup4
+```
+
+With those installed on your computer, we can then import the modules necessary for this program to work.
+
+``` python
+import bs4
+import requests
+from bs4 import BeautifulSoup
+```
+
+Keeping it concise, the requests module is a powerful library that allows us to access web pages and APIs, post to forms, and much more by sending HTTP requests. Beautiful Soup (imported as bs4) may sound like a silly name, but in reality it is a parsing library that is useful for extracting data from HTML / XML documents.
+
+## Implementation of Program
+
+For the website that I decided to web scrape, I chose a simple one: *https://finance.yahoo.com/quote/FB?p=FB*. With this url, I was able to use the requests module to access all of the response data.
+
+```python
+url = requests.get('https://finance.yahoo.com/quote/FB?p=FB')
+```
+
+If you navigate to the link, you can see that we are interested in the Facebook stock.
+
+Currently, the price of one share of Facebook is 234.91. This may be different for you, which is perfectly fine!
+
+Now we have to utilize the BeautifulSoup class from the bs4 module, which allows us to extract data from the HTML.
+
+```python
+soup = bs4.BeautifulSoup(url.text, features="html.parser")
+```
+
+We're not done yet!
+
+As you may know, a website contains a plethora of information. From a programmer's perspective, trying to understand each and every tag contained within the HTML/CSS would be virtually impossible. However, we can grab just the information that we need, which is the stock price (234.91 when this blog was created).
+
+```python
+price = soup.find_all("div", {'class': 'My(6px) Pos(r) smartphone_Mt(6px)'})[0].find('span').text
+```
+
+Now we have all the information that we need to complete this function, so we can just return the price.
+
+```python
+return price
+```
+
+To sum up, the purpose of this program is to always know the current price of the stock that we are interested in, so that we don't sell our stock at a lower price than we intend.
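Putting the pieces together, the extraction logic can be wrapped up as a sketch like the following. The helper name parse_price_from_html and the function name parsePrice are our own choices for this walkthrough, and the CSS class string is copied from the page as of writing, so it may have changed on the live site since:

```python
import bs4
import requests

# Yahoo Finance's markup at the time the blog was written; this class
# string is an assumption that may no longer match the live page.
PRICE_DIV_CLASS = "My(6px) Pos(r) smartphone_Mt(6px)"

def parse_price_from_html(html):
    # Parse the document and pull the text of the <span> nested inside
    # the quote header <div>, exactly as the snippet above does.
    soup = bs4.BeautifulSoup(html, features="html.parser")
    return soup.find_all("div", {"class": PRICE_DIV_CLASS})[0].find("span").text

def parsePrice():
    # Fetch the live quote page (requires network access) and parse it.
    response = requests.get("https://finance.yahoo.com/quote/FB?p=FB")
    return parse_price_from_html(response.text)
```

Splitting the parsing out of the network call also makes the logic easy to test against a saved copy of the page.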
Therefore, we can wrap the implementation above in a function, here called parsePrice(), and call it from a while loop.
+
+```python
+while True:
+    print("The current price: " + str(parsePrice()))
+```
+
+Note that this loop sends a new request on every iteration; in practice, you would add a short delay between requests so that you don't flood the site.
+
+## Conclusion
+
+That's it! We've fully implemented a real time stock price scraper using Python. You no longer have to go through the trouble of looking up the price of a stock; you can simply run this program! Here's what our output should look like.
+
+![](https://i.imgur.com/3xBFRuj.png)

From 7199ffa7c5fcb7400bd9151eeb312e9068ad43ca Mon Sep 17 00:00:00 2001
From: Jason Hong <59253882+jhong00@users.noreply.github.com>
Date: Mon, 25 May 2020 11:23:08 -0700
Subject: [PATCH 04/12] Delete proposal.md

---
 .../Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md | 1 -
 1 file changed, 1 deletion(-)
 delete mode 100644 Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md

diff --git a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md
deleted file mode 100644
index 8b13789..0000000
--- a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md
+++ /dev/null
@@ -1 +0,0 @@
-

From 172c90c2aa9ab33ba1a220eeb746ed4582f9579f Mon Sep 17 00:00:00 2001
From: Jason Hong <59253882+jhong00@users.noreply.github.com>
Date: Mon, 25 May 2020 11:23:53 -0700
Subject: [PATCH 05/12] Delete Web-Scraping-Current-Stock-Price

---
 Web-Scraping-Current-Stock-Price | 32 --------------------------------
 1 file changed, 32 deletions(-)
 delete mode 100644 Web-Scraping-Current-Stock-Price

diff --git a/Web-Scraping-Current-Stock-Price b/Web-Scraping-Current-Stock-Price
deleted file mode 100644
index 8b5b030..0000000
--- a/Web-Scraping-Current-Stock-Price
+++ /dev/null
@@ -1,32 +0,0 @@
-# TEMPLATE
-
-## :fire: Do not edit this file - copy the template and create your own file.
- -**[Step-By-Step Technical Blog Guide](https://hq.bitproject.org/how-to-write-a-technical-blog/)** - -### :pushpin: Step 1 -**TITLE:** -Web-Scraping-Current-Stock-Price - -**TOPIC:** -Machine Learning - -**DESCRIPTION (5-7+ sentences):** -The topic that I decided to focus on is web scraping of stock prices. This means that whenever a stock price changes, the program will have the functionality of outputting that information. The flexibility and simplicity of this program that I implemented is evident of how easy web scraping is to learn and how accessible it is to anyone that has a computer and an IDE. Anyone can learn how to web scrape and this is at its most basic form. Web scraping has been something that I've been wanting to learn for awhile now, so this was the perfect opportunity for me to learn and educate others on what I learned throughout this process. - -### :pushpin: Step 2 -:family: **TARGET AUDIENCE (3-5+ sentences):** -Beginning Programmers with minimal knowledge in coding - -### :pushpin: Step 3 -> Outline your learning/teaching structure: -My teaching structure is straight to the point and step by step. I don't want to bore those that don't want to read a full essay about a simple topic. Especially in this generation, people hate reading, so I think including images and getting straight to the point will be quickest and easiest to learn. - -**Beginning (2-3+ sentences):** -I will discuss about my motive for creating the blog. Out of all the ideas that I could've chosen, why did I choose this one? Unrelated to my motive, I will transition into talking about the modules necessary to be imported and installed on your operating system for this program to work. - -**Middle (2-3+ sentences):** -I will discuss about how to actually implement the program. Each code will be explained concisely. I have about 10-15 lines of code intended to be explained, so I don't want to include paragraphs about each line of code when some of it is self explanatory. 
- -**End (2-3+ sentences):** -Lastly, I will talk about how this topic relates to the real world and why it is relevant in the society that we live in today. I will also provide screenshots of what the output looks like and provide an ending statement to encourage others to pursue more web scraping! From 594fff0256623472f4ad7fd6f6f275a2ae588a9f Mon Sep 17 00:00:00 2001 From: Jason Hong <59253882+jhong00@users.noreply.github.com> Date: Mon, 25 May 2020 11:24:16 -0700 Subject: [PATCH 06/12] Create proposal.md --- .../proposal.md | 32 +++++++++++++++++++ 1 file changed, 32 insertions(+) create mode 100644 Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md diff --git a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md new file mode 100644 index 0000000..8b5b030 --- /dev/null +++ b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md @@ -0,0 +1,32 @@ +# TEMPLATE + +## :fire: Do not edit this file - copy the template and create your own file. + +**[Step-By-Step Technical Blog Guide](https://hq.bitproject.org/how-to-write-a-technical-blog/)** + +### :pushpin: Step 1 +**TITLE:** +Web-Scraping-Current-Stock-Price + +**TOPIC:** +Machine Learning + +**DESCRIPTION (5-7+ sentences):** +The topic that I decided to focus on is web scraping of stock prices. This means that whenever a stock price changes, the program will have the functionality of outputting that information. The flexibility and simplicity of this program that I implemented is evident of how easy web scraping is to learn and how accessible it is to anyone that has a computer and an IDE. Anyone can learn how to web scrape and this is at its most basic form. Web scraping has been something that I've been wanting to learn for awhile now, so this was the perfect opportunity for me to learn and educate others on what I learned throughout this process. 
+ +### :pushpin: Step 2 +:family: **TARGET AUDIENCE (3-5+ sentences):** +Beginning Programmers with minimal knowledge in coding + +### :pushpin: Step 3 +> Outline your learning/teaching structure: +My teaching structure is straight to the point and step by step. I don't want to bore those that don't want to read a full essay about a simple topic. Especially in this generation, people hate reading, so I think including images and getting straight to the point will be quickest and easiest to learn. + +**Beginning (2-3+ sentences):** +I will discuss about my motive for creating the blog. Out of all the ideas that I could've chosen, why did I choose this one? Unrelated to my motive, I will transition into talking about the modules necessary to be imported and installed on your operating system for this program to work. + +**Middle (2-3+ sentences):** +I will discuss about how to actually implement the program. Each code will be explained concisely. I have about 10-15 lines of code intended to be explained, so I don't want to include paragraphs about each line of code when some of it is self explanatory. + +**End (2-3+ sentences):** +Lastly, I will talk about how this topic relates to the real world and why it is relevant in the society that we live in today. I will also provide screenshots of what the output looks like and provide an ending statement to encourage others to pursue more web scraping! 
From be3108a70d5a0249e1811e8f09e6ff5c4cdc46fb Mon Sep 17 00:00:00 2001 From: Jason Hong <59253882+jhong00@users.noreply.github.com> Date: Mon, 25 May 2020 16:52:22 -0700 Subject: [PATCH 07/12] Update proposal.md --- .../Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md index 8b5b030..11ae53d 100644 --- a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md +++ b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md @@ -30,3 +30,6 @@ I will discuss about how to actually implement the program. Each code will be ex **End (2-3+ sentences):** Lastly, I will talk about how this topic relates to the real world and why it is relevant in the society that we live in today. I will also provide screenshots of what the output looks like and provide an ending statement to encourage others to pursue more web scraping! 
+ +** Loom Video Link: ** +https://www.loom.com/share/edfaca5e142048e688d29bfd31972943 From 14ebe5007bc36b088e043115c42bffd8171d3b92 Mon Sep 17 00:00:00 2001 From: Jason Hong <59253882+jhong00@users.noreply.github.com> Date: Sun, 31 May 2020 21:26:20 -0700 Subject: [PATCH 08/12] Create proposal.md --- .../proposal.md | 36 +++++++++++++++++++ 1 file changed, 36 insertions(+) create mode 100644 GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md diff --git a/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md b/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md new file mode 100644 index 0000000..aab3434 --- /dev/null +++ b/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md @@ -0,0 +1,36 @@ +# TEMPLATE + +## :fire: Do not edit this file - copy the template and create your own file. + +**[Step-By-Step Technical Blog Guide](https://hq.bitproject.org/how-to-write-a-technical-blog/)** + +### :pushpin: Step 1 +**TITLE:** +Navigating schemas/types/fetching requests, using mutations/subscriptions + +**TOPIC:** +GraphQL + +**DESCRIPTION (5-7+ sentences):** + +GraphQL, a language for APIs, was first developed by Facebook. After it was introduced, it became extremely popular as some considered it a better API than rest API because underfetching and overfetching data no longer became a problem. GraphQL allows us to specify the root field and the payload from the client side, giving greater flexibility to manipulate the data we want to request with a single endpoint. Furthermore, GraphQL has special features of allowing the client to update data, commonly known as mutations, as well as receive real-time data, the concept of subscriptions. These are the core concepts I will be covering within my blog. 
+ +### :pushpin: Step 2 +:family: **TARGET AUDIENCE (3-5+ sentences):** + +My target audience are beginning coders who have little to no experience with GraphQL. I have quite a bit of experience with coding but I currently do not have much knowledge with GraphQL. My goal is to help others get to the level that I will be in after finishing this blog. All in all, this blog will be concise and easy to learn for students who have minimal experience with coding. + +### :pushpin: Step 3 +> Outline your learning/teaching structure: + +**Beginning (2-3+ sentences):** + +As instructed, our blog is supposed to be catered towards people with little to no experience with GraphQL, therefore I will teach my topic in a format that presents a lot of examples. I will begin with giving background information about what GraphQL is: what is it, why is it important, and why do we use it? + +**Middle (2-3+ sentences):** + +Once the topic of GraphQL is covered, I will transition to what mutations and subscriptions are. I am unfamiliar with how to create a client and a server, but once I learn how to do so, I will present code deliverables with images of how to fetch requests related to mutations and subscriptions from the server and client side. + +**End (2-3+ sentences):** + +When the code deliverables are complete, I will explain the next steps that we can take to go further in what we have learned in this blog. I want to encourage others to continue learning GraphQL and work on personal projects of their own to further their knowledge in this topic. 
From c816c1b63b554a99f15092b34004e5d064ff8d01 Mon Sep 17 00:00:00 2001 From: Jason Hong <59253882+jhong00@users.noreply.github.com> Date: Sun, 31 May 2020 21:26:58 -0700 Subject: [PATCH 09/12] Update proposal.md --- .../proposal.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md b/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md index aab3434..ca2b7cb 100644 --- a/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md +++ b/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md @@ -13,7 +13,7 @@ GraphQL **DESCRIPTION (5-7+ sentences):** -GraphQL, a language for APIs, was first developed by Facebook. After it was introduced, it became extremely popular as some considered it a better API than rest API because underfetching and overfetching data no longer became a problem. GraphQL allows us to specify the root field and the payload from the client side, giving greater flexibility to manipulate the data we want to request with a single endpoint. Furthermore, GraphQL has special features of allowing the client to update data, commonly known as mutations, as well as receive real-time data, the concept of subscriptions. These are the core concepts I will be covering within my blog. +GraphQL, a language for APIs, was first developed by Facebook. After it was introduced, it became extremely popular as some considered it a better API than rest API because underfetching and overfetching data no longer became a problem. GraphQL allows us to specify the root field and the payload from the client side, giving greater flexibility to manipulate the data we want to request from the server. 
Furthermore, GraphQL has special features of allowing the client to update data, commonly known as mutations, as well as receive real-time data, the concept of subscriptions. These are the core concepts I will be covering within my blog. ### :pushpin: Step 2 :family: **TARGET AUDIENCE (3-5+ sentences):** From 960c7beda4cf066fa348d091e998ac90af01e45f Mon Sep 17 00:00:00 2001 From: Jason Hong <59253882+jhong00@users.noreply.github.com> Date: Sat, 27 Jun 2020 15:08:16 -0700 Subject: [PATCH 10/12] Create blog.md --- .../blog.md | 186 ++++++++++++++++++ 1 file changed, 186 insertions(+) create mode 100644 GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md diff --git a/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md b/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md new file mode 100644 index 0000000..935a8a0 --- /dev/null +++ b/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md @@ -0,0 +1,186 @@ +# Navigating Schemas, Types, Fetching Requests, using Mutations & Subscriptions + +Do you ever wonder how social media services like Facebook or Instagram are able to interact with the user? Whenever you click on that like button on a post that your friend posted, how is it possible that a week after the like button is still there? This is achieved through the usage of application programming interfaces, commonly known as APIs. In short, with the help of APIs, the user is able to deliver requests for information from the server which then the server sends a response back to the user. This is precisely why the like button is still present in the same post a week after -- because the database used by Facebook stores all of the information and we are able to access it through performing an action. 
You might think that APIs can only fetch existing data from the server, but they offer other impressive features as well, including creating, updating, and deleting data. Developers choose the API style of their preference, two of the most common being REST and GraphQL. Throughout this blog, I will discuss the features and services of GraphQL, as well as the reasons why GraphQL can be a better query language than a REST API.
+
+## Why is GraphQL better than the REST API?
+
+What makes GraphQL so unique is that it exposes a single endpoint instead of multiple endpoints. When you request data from the server, you specify the exact data you need in the payload, the body of the query, comparable to the parameters of a function when coding. This prevents overfetching data that is unnecessary for the user, as well as underfetching, which forces additional requests. A REST API retrieves data from a server through multiple endpoints, which can be an inefficient way to fetch data. For example, if a user wants the names of the user's first three followers on Instagram, the user would have to send three separate GET requests to the server and would receive unnecessary information with each response. As you can see, this REST procedure is ineffective and involves overfetching unnecessary information.
+
+## Fetching Requests
+
+In order to fetch requests from the server, we need to download Postman, a piece of software that enables users to test API calls.
+
+After fully installing Postman,
    +
  1. Click on the + sign to create a new request.
+  2. Observe from *https://api.github.com/users/vdespa* that there is a lot of data stored on the server; with GraphQL's syntax, we can specify the exact field that we want information from, such as the name.
+  3. We could use the REST API, which returns all of the data through multiple endpoints, but we will be demonstrating GraphQL, which has a single endpoint and allows us to fetch, update, add, or delete data.
+  4. We will be using the GitHub API at *https://api.github.com/graphql*. Because GraphQL queries are sent in the request body, change the HTTP method from "GET" to "POST".
+  5. After changing the method, a "GraphQL" option appears when we direct to the Body tab.
+  6. Because we are using GitHub's API, we have to click on "Authorization" and provide a Bearer Token to get access to all of GitHub's API features.
+  7. Create a new API by clicking "New API" on the side and adding the schema that I will attach in the description. Adding this schema gives us access to the schema-related features that we will use.
+  8. Going back to our query, click refresh and observe that we have a new schema that we can use.
+  9. We can finally implement our query to obtain the data we want. We will request data for the login name "defunkt", which we indicate as an argument; for this guide, we will be requesting the name, bio, and Twitter username.
+  10. Adding this altogether yields:
+
+```
+{
+  user(login: "defunkt") {
+    name
+    bio
+    twitterUsername
+  }
+}
+```
+
+  11. Scrolling down to the bottom, we can see the response that the server sends back to the client. Note that GraphQL responses are JSON objects whose requested fields are wrapped in a top-level "data" key:
+
+```
+{
+  "data": {
+    "user": {
+      "name": "Chris",
+      "bio": "emoji",
+      "twitterUsername": null
+    }
+  }
+}
+```
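For readers who prefer code over Postman, the same call can be sketched in Python with the requests library. The endpoint and query come from the steps above; GITHUB_TOKEN is a placeholder environment variable that you must set to your own token for the request to succeed:

```python
import json
import os

import requests  # third-party: pip install requests

ENDPOINT = "https://api.github.com/graphql"

QUERY = """
{
  user(login: "defunkt") {
    name
    bio
    twitterUsername
  }
}
"""

def build_graphql_request(query):
    # GraphQL calls are POSTed to the single endpoint with a JSON body
    # whose "query" key holds the query text.
    return {"url": ENDPOINT, "json": {"query": query}}

def run_query(token):
    # The bearer token plays the same role as Postman's Authorization tab.
    request = build_graphql_request(QUERY)
    headers = {"Authorization": "Bearer " + token}
    response = requests.post(request["url"], json=request["json"], headers=headers)
    return response.json()

if __name__ == "__main__":
    token = os.environ.get("GITHUB_TOKEN")  # placeholder: export your own token first
    if token:
        print(json.dumps(run_query(token), indent=2))
    else:
        print("Set GITHUB_TOKEN to run the query.")
```

Keeping the request-building step separate from the network call makes it easy to inspect the exact payload that Postman would send.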
+
+
+## Navigating Schemas & Types
+
+Schemas are a collection of types and the relationships between those types. Supporting the clients' desired actions, a schema lets the user see what data is available and request exactly the data they want retrieved from the server.
+
+GraphQL defines the schema of an API using its own syntax, known as the Schema Definition Language (SDL).
+
+Here's a basic example of a schema defining two types, Song and Artist:
+
+```
+type Song {
+  title: String
+  artist: Artist
+}
+
+type Artist {
+  name: String
+  songs: [Song]
+}
+```
+The Song type has two fields, title and artist, which are of type String and Artist respectively.
+
+The Artist type has two fields, name and songs, which are of type String and [Song] respectively.
+
+## Supported Types of Schemas from GraphQL
+
+A GraphQL schema can be categorized into different types.
+
+In the earlier example, we displayed object types:
+
+```
+type Song {
+  title: String
+  artist: Artist
+}
+
+type Artist {
+  name: String
+  songs: [Song]
+}
+```
+
+Object types can include other object types or scalar types as fields.
+
+In this example, the object-type fields are artist and songs, while the scalar-type fields are title and name.
+
+**The 5 Scalar Types in GraphQL are:**
+
+  • Int : a signed 32-bit integer value
+  • Float : a signed double-precision floating-point value
+  • String : a UTF-8 character sequence
+  • Boolean : true or false
+  • ID : a unique identifier, not intended to be human-readable, used to refetch an object
+
+The most common and simplest types in a GraphQL schema are object types. There are several other types covered in this blog, such as mutations and subscriptions, which are structured quite differently from object types.
+
+## The Mutation Type
+
+We've already learned how to fetch data from the server, but we also need a way to make changes to the data stored on the server. This is achieved through mutations.
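Before looking at concrete examples, it helps to see where mutations and subscriptions live in a schema. The sketch below reuses the Song and Artist types from above; the root field names (songs, addSong, newSong) are illustrative choices for this blog, not part of any real API:

```
type Query {
  songs: [Song]
}

type Mutation {
  addSong(title: String, author: String): Song
}

type Subscription {
  newSong: Song
}
```

A schema declares one root type per operation: clients read data through Query, change it through Mutation, and receive pushed updates through Subscription.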
Mutations are classified into 3 different types:
+
+• Creating new data
+• Updating existing data
+• Deleting existing data
+
+As mentioned earlier, mutations have a unique structure that distinguishes them from the other schema types.
+
+Try to understand the structure of mutations in the following example, which creates a Song object:
+
+```
+mutation CreateSong {
+  addSong(title: "Hello", author: "Adele") {
+    title
+    author {
+      name
+    }
+  }
+}
+```
+
+Mutations start with the mutation keyword and specify the data to be added or changed in the arguments of the root field addSong; in our example, those are title and author. After the arguments, we specify the information that we want the server to send back in the payload; in our example, that is the title and the author's name.
+
+Our server would respond to this call with a response matching the mutation's syntax, like this:
+
+```
+{
+  "data": {
+    "addSong": {
+      "title": "Hello",
+      "author": {
+        "name": "Adele"
+      }
+    }
+  }
+}
+```
+
+To be clear, mutations are executed in the order they are specified, and multiple mutations can be executed in a single request from the client.
+
+## The Subscription Type
+
+The concept of the subscription type in GraphQL is that the client keeps a realtime connection to the server. This means that whenever relevant changes are made on the server by any client, every subscribed client is immediately informed about those changes.
+
+This subscription feature is only available to the clients that subscribe to an event. For example, when you are on Snapchat, you might subscribe to a news channel to keep track of what is going on every day. Unlike queries and mutations, subscriptions do not follow a request-response cycle; instead, the client holds open a steady connection and data is pushed directly from the server to the client.
+
+The structure of subscriptions is similar to that of mutations.
Observe from this example how the client subscribes to an event on the Song type:
+
+```
+subscription {
+  newSong {
+    title
+    author
+  }
+}
+```
+
+When the client subscribes to the server, a connection is opened between them. Whenever a mutation is executed that creates a new Song, the server will push the data about that song over to the client:
+
+```
+{
+  "newSong": {
+    "title": "Hello",
+    "author": "Adele"
+  }
+}
+```
+
+## The Big Picture
+
+GraphQL is pretty easy to understand! I think that people have the wrong perception of learning about APIs because they don't know where to start. When you're learning a new concept, you should always start with small steps that grow into larger ones. In this tutorial, we learned about schemas and the syntax for writing them, how to fetch data with the Postman software, and the object, mutation, and subscription types. In conclusion, query languages are becoming more widespread in software development at corporations like Facebook, GitHub, and Pinterest, and GraphQL is a powerful tool that everyone can learn and apply to APIs that are self-created or borrowed online.

From b3ab8d5f358f27cb54598d022845973f54a08b4c Mon Sep 17 00:00:00 2001
From: Jason Hong <59253882+jhong00@users.noreply.github.com>
Date: Sat, 27 Jun 2020 15:08:44 -0700
Subject: [PATCH 11/12] Update blog.md

---
 .../blog.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md b/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md
index 935a8a0..ea63512 100644
--- a/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md
+++ b/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md
@@ -37,7 +37,7 @@ After fully installing Postman,
 
 ```
-
  • Scrolling down to the bottom, we can see the data that we requested from the server output information back to the client
  • +
  • Scrolling down to the bottom, we can see the data that we requested from the server output information back to the client:
  • ``` { From f00826724583f5a46b0d73bcd59dd83e43ec9ccd Mon Sep 17 00:00:00 2001 From: Matthew Edward Lee <63700477+cdhlee@users.noreply.github.com> Date: Sat, 11 Jul 2020 23:09:44 -0700 Subject: [PATCH 12/12] Revert "proposal.md" --- .../blog.md | 186 ------------------ .../proposal.md | 36 ---- .../Web-Scraping-Current-Stock-Price/blog.md | 71 ------- .../proposal.md | 35 ---- 4 files changed, 328 deletions(-) delete mode 100644 GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md delete mode 100644 GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md delete mode 100644 Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/blog.md delete mode 100644 Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md diff --git a/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md b/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md deleted file mode 100644 index ea63512..0000000 --- a/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/blog.md +++ /dev/null @@ -1,186 +0,0 @@ -# Navigating Schemas, Types, Fetching Requests, using Mutations & Subscriptions - -Do you ever wonder how social media services like Facebook or Instagram are able to interact with the user? Whenever you click on that like button on a post that your friend posted, how is it possible that a week after the like button is still there? This is achieved through the usage of application programming interfaces, commonly known as APIs. In short, with the help of APIs, the user is able to deliver requests for information from the server which then the server sends a response back to the user. 
This is precisely why the like button is still present on the same post a week later: Facebook's database stores all of that information, and we can access it by performing an action. You might think that APIs can only fetch existing data from the server, but they have other impressive capabilities, including creating, updating, and deleting data. Developers choose the API style they prefer; two of the most common are REST and GraphQL. Throughout this blog, I will discuss the features and services of GraphQL as well as the reasons why GraphQL can be a better query language than a REST API.

## Why is GraphQL better than the REST API?

What makes GraphQL unique is that it exposes a single endpoint instead of multiple endpoints. When you request data from the server, you specify the exact data you need in the payload (the body of the query), comparable to passing parameters to a function in code. This prevents both overfetching (receiving data you don't need) and underfetching (having to send extra requests for data you do need). A REST API typically uses multiple endpoints to retrieve related data, which can be inefficient. For example, to access the names of a user's first three followers on Instagram through a REST API, a client might have to send three separate requests and would receive plenty of unneeded information along the way. With GraphQL, the same data can be fetched in one precisely shaped request.

## Fetching Requests

In order to fetch data from a server, we first download Postman, a tool that lets users test API calls.

After fully installing Postman,
      -
1. Click on the + sign to create a new request
2. Observe from *https://api.github.com/users/vdespa* that a lot of data is stored on the server; with GraphQL syntax we can specify the exact field we want information from, such as the name
3. We could use the REST API, which retrieves all of the data through multiple endpoints, but we will demonstrate GraphQL, which has a single endpoint and also allows us to update, add, or delete data
4. We will be using the GitHub API at *https://api.github.com/graphql*, and because we are using GraphQL we have to change the request method to "POST"
5. After changing the method, a "GraphQL" option appears when we go to the Body tab
6. Because we are using GitHub's API, we have to click on "Authorization" and provide a Bearer Token to get access to all of GitHub's API features
7. Create a new API by clicking "New API" on the side and adding the schema that I will attach in the description. Adding this schema gives us access to the schema features and formats we can now use
8. Going back to our query, click refresh and observe that we have a new schema available
9. We can finally write our query to request data from the server
10. We will request data for the login name "defunkt", which we indicate in the parameter; for this guide, we will request the name, bio, and Twitter username
11. Putting this all together yields:

```
{
  user(login: "defunkt") {
    name
    bio
    twitterUsername
  }
}

```

12. Scrolling down to the bottom, we can see the response that the server sends back to the client:

```
{
  "data": {
    "user": {
      "name": "Chris",
      "bio": "emoji",
      "twitterUsername": null
    }
  }
}

```
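Postman hides the mechanics, but the call built in the steps above is just an HTTP POST whose JSON body carries the query string under a "query" key. Here is a small sketch that builds that body by hand; it runs entirely offline, and the requests.post call is left in a comment because it needs a real bearer token (the <YOUR_TOKEN> value is a placeholder):

```python
import json

# The GraphQL query from the steps above, as a plain string.
query = """
{
  user(login: "defunkt") {
    name
    bio
    twitterUsername
  }
}
"""

# A GraphQL HTTP request is a JSON object whose "query" key holds the query text.
payload = {"query": query}
body = json.dumps(payload)

# To actually send it, POST the payload to the endpoint used above:
#
#   import requests
#   r = requests.post(
#       "https://api.github.com/graphql",
#       json=payload,
#       headers={"Authorization": "Bearer <YOUR_TOKEN>"},
#   )
#   print(r.json())

print(body)
```

Seeing the request in this form makes it clear why GraphQL needs only one endpoint: the query in the body, not the URL, decides what data comes back.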
## Navigating Schemas & Types

A schema is a collection of types and the relationships between those types. Schemas let the client see what data is available and support requesting exactly the data the client wants from the server.

GraphQL defines the schema of an API using its own syntax, known as the Schema Definition Language (SDL).

Here's a basic example of a schema defining the types Song and Artist:

```
type Song {
  title: String
  artist: Artist
}

type Artist {
  name: String
  songs: [Song]
}
```
The Song type has two fields, title and artist, of type String and Artist respectively.

The Artist type has two fields, name and songs, of type String and [Song] respectively.

## Supported Types in a GraphQL Schema

A GraphQL schema is built from several categories of types.

In the earlier example, we displayed object types:

```
type Song {
  title: String
  artist: Artist
}

type Artist {
  name: String
  songs: [Song]
}
```

Object types can include other object types or scalar types as fields.

In this example, the object-type fields are artist and songs, while title and name are scalar fields.

**The 5 built-in scalar types in GraphQL are:**

 • Int: a signed 32-bit integer
 • Float: a double-precision floating-point value
 • String: a UTF-8 character sequence
 • Boolean: true or false
 • ID: a unique identifier, serialized as a string, often used to refetch an object

The most common and simplest types in a GraphQL schema are object types. This blog also covers mutations and subscriptions, which are structured quite differently from object types.

## The Mutation Type

We've already learned how to fetch data from the server, but we also need a way to change the data stored on the server. This is achieved through mutations.
Mutations are classified into three different types:

• Creating new data
• Updating existing data
• Deleting existing data

As mentioned earlier, mutations have a structure that distinguishes them from the other operation types.

Try to understand the structure of a mutation in the following example, which creates a Song object:

```
mutation CreateSong {
  addSong(title: "Hello", artist: "Adele") {
    title
    artist {
      name
    }
  }
}
```

A mutation starts with the mutation keyword and specifies the data to be added or changed in the arguments of the root field addSong (in our example, title and artist). After the arguments, we specify the information that we want back from the server in the payload (in our example, the title and the artist's name).

Our server would respond to this call with a response matching the mutation's structure, like this:

```
{
  "data": {
    "addSong": {
      "title": "Hello",
      "artist": {
        "name": "Adele"
      }
    }
  }
}
```

Multiple mutations can be sent in a single request from the client, and they are executed in the order in which they are specified.

## The Subscription Type

The idea behind the subscription type in GraphQL is a real-time connection to the server. Whenever a client makes changes on the server, every subscribed client is immediately informed about those changes.

This real-time data is not broadcast to everyone; it is only delivered to clients that have subscribed to an event. For example, on Snapchat you might subscribe to a news channel to keep track of what is going on each day. Unlike queries and mutations, subscriptions do not follow the usual request-response cycle; instead, data is pushed directly from the server to the client as events occur.

The structure of a subscription is similar to that of a mutation.
Observe in this example how the client subscribes to an event on the Book type:

```
subscription {
  newBook {
    title
    author
  }
}
```

When the client subscribes, a connection is opened between the client and the server. Whenever a mutation is executed that creates a new Book, the server sends the data about that book over to the client:

```
{
  "newBook": {
    "title": "Green Eggs and Ham",
    "author": "Dr. Seuss"
  }
}
```

## The Big Picture

GraphQL is pretty easy to understand! I think people have the wrong perception of APIs simply because they don't know where to start learning about them. When you're learning any new concept, you should start with small steps that grow into larger ones. In this tutorial, we learned about schemas and the syntax for writing them, how to fetch data with Postman, and the object, mutation, and subscription types. Query languages like GraphQL are increasingly used in software development by companies such as Facebook, GitHub, Pinterest, and many more, and they are a powerful tool that anyone can learn and apply to APIs they have created themselves or found online.




diff --git a/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md b/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md
deleted file mode 100644
index ca2b7cb..0000000
--- a/GraphQL/Navigating schemas, types, fetching requests, using mutations & subscriptions/proposal.md
+++ /dev/null
@@ -1,36 +0,0 @@
# TEMPLATE

## :fire: Do not edit this file - copy the template and create your own file.
**[Step-By-Step Technical Blog Guide](https://hq.bitproject.org/how-to-write-a-technical-blog/)**

### :pushpin: Step 1
**TITLE:**
Navigating schemas/types/fetching requests, using mutations/subscriptions

**TOPIC:**
GraphQL

**DESCRIPTION (5-7+ sentences):**

GraphQL, a query language for APIs, was first developed by Facebook. After it was introduced, it became extremely popular, as many considered it a better alternative to REST APIs because underfetching and overfetching data were no longer a problem. GraphQL allows us to specify the root field and the payload from the client side, giving greater flexibility over the data we request from the server. Furthermore, GraphQL has special features that allow the client to update data (mutations) as well as receive real-time data (subscriptions). These are the core concepts I will be covering in my blog.

### :pushpin: Step 2
:family: **TARGET AUDIENCE (3-5+ sentences):**

My target audience is beginning coders who have little to no experience with GraphQL. I have quite a bit of experience with coding, but I do not yet have much knowledge of GraphQL. My goal is to help others get to the level that I will be at after finishing this blog. All in all, this blog will be concise and easy to follow for students with minimal coding experience.

### :pushpin: Step 3
> Outline your learning/teaching structure:

**Beginning (2-3+ sentences):**

Our blog is supposed to cater to people with little to no experience with GraphQL, so I will teach my topic in a format that presents plenty of examples. I will begin with background information about GraphQL: what is it, why is it important, and why do we use it?

**Middle (2-3+ sentences):**

Once the basics of GraphQL are covered, I will transition to what mutations and subscriptions are.
I am not yet familiar with how to create a client and a server, but once I learn how to do so, I will present code deliverables with images showing how to send mutation and subscription requests from both the server and client side.

**End (2-3+ sentences):**

When the code deliverables are complete, I will explain the next steps we can take to go further with what we have learned in this blog. I want to encourage others to continue learning GraphQL and to work on personal projects of their own to deepen their knowledge of this topic.

diff --git a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/blog.md b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/blog.md
deleted file mode 100644
index f87988e..0000000
--- a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/blog.md
+++ /dev/null
@@ -1,71 +0,0 @@
# Why Web Scraping?

Everyone wants to make money in today's society. More and more college students want to become engineers, computer scientists, and doctors because those paths promise a stable income despite their difficulty. But what if I told you that you don't have to take one of those paths to become successful? As an investor myself, I decided to learn about stocks at a young age. When I first started, I looked for the perfect brokerage, and one important thing I noticed was that many brokerages were late in updating stock prices, which can eventually lead to losing money. Obviously, I don't want that to happen to you. So look no further! I've created a real-time stock price scraper in Python that lets you keep up with the current price of the stock you're interested in.

## Installation of Important Modules

Before we begin, there are a few things we need to install. Navigate to your command line and type the following commands individually.
```console
pip install requests
pip install beautifulsoup4
```

(Note that the Beautiful Soup package on PyPI is named beautifulsoup4; the plain `beautifulsoup` package is an old, unmaintained release.)

With those installed on your computer, we can import the modules this program needs:

``` python
import bs4
import requests
```

Keeping it concise: the requests module lets us send HTTP requests, which is how we will access web pages and APIs. Beautiful Soup (the bs4 module) may sound like a silly name, but it is a parsing library that is very useful for extracting data from HTML/XML documents.

## Implementation of Program

For the website to scrape, I chose a simple one: *https://finance.yahoo.com/quote/FB?p=FB*. With this URL, we can use the requests module to download all of the response data.

```python
page = requests.get('https://finance.yahoo.com/quote/FB?p=FB')
```

If you navigate to the link, you can see that we are interested in the Facebook stock.

Currently, the price of one share of Facebook is 234.91. This may be different for you, which is perfectly fine!


Now we use BeautifulSoup, which lets us extract data from the HTML we just downloaded:

```python
soup = bs4.BeautifulSoup(page.text, features="html.parser")
```


We're not done yet!

As you may know, a website contains a plethora of information. From a programmer's perspective, trying to understand every element in the HTML/CSS would be virtually impossible. Instead, we can grab just the information we need, the stock price (234.91 when this blog was written):

```python
price = soup.find_all("div", {'class': 'My(6px) Pos(r) smartphone_Mt(6px)'})[0].find('span').text
```

(The class string above is specific to Yahoo Finance's markup at the time of writing and may change.)

Now we have everything we need; wrapping the lines above in a function called parsePrice(), we simply return the price.
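As a side note, you can sanity-check the parsing step without touching the network by feeding BeautifulSoup a tiny HTML snippet shaped like Yahoo's markup. The class string below is the page-specific value used in this blog and is an assumption; Yahoo can change it at any time.

```python
import bs4

# A miniature stand-in for the Yahoo Finance page: a div with the
# price wrapped in a span, using the same class string as the real page.
html = """
<div class="My(6px) Pos(r) smartphone_Mt(6px)">
  <span>234.91</span>
</div>
"""

soup = bs4.BeautifulSoup(html, features="html.parser")

# Exactly the same lookup as the scraper above, but against our local snippet.
price = soup.find_all("div", {"class": "My(6px) Pos(r) smartphone_Mt(6px)"})[0].find("span").text
print(price)
```

If this prints 234.91, the parsing logic is sound, so a failure against the live site points at changed page markup rather than at your code.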
```python
return price
```

To sum up, the purpose of this program is to always know the current price of the stock we're interested in, so that we don't sell our stock at a lower price than we intend. With the logic above wrapped in a function called parsePrice(), we can call it in a loop, pausing briefly between requests so we don't flood the site:

```python
import time

while True:
    print("The current price: " + str(parsePrice()))
    time.sleep(10)  # pause between requests
```

## Conclusion

That's it! We've fully implemented a real-time stock price scraper in Python. You no longer have to go through the trouble of looking up the price of a stock; you can simply run this program. Here's what our output should look like:

![](https://i.imgur.com/3xBFRuj.png)

diff --git a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md b/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md
deleted file mode 100644
index 11ae53d..0000000
--- a/Machine-Learning/Jason Hong/Web-Scraping-Current-Stock-Price/proposal.md
+++ /dev/null
@@ -1,35 +0,0 @@
# TEMPLATE

## :fire: Do not edit this file - copy the template and create your own file.

**[Step-By-Step Technical Blog Guide](https://hq.bitproject.org/how-to-write-a-technical-blog/)**

### :pushpin: Step 1
**TITLE:**
Web-Scraping-Current-Stock-Price

**TOPIC:**
Machine Learning

**DESCRIPTION (5-7+ sentences):**
The topic I decided to focus on is web scraping of stock prices: whenever a stock's price changes, the program outputs the new price. The flexibility and simplicity of the program I implemented show how easy web scraping is to learn and how accessible it is to anyone with a computer and an IDE. Anyone can learn how to web scrape, and this is web scraping at its most basic. Web scraping is something I've been wanting to learn for a while, so this was the perfect opportunity to learn it and to educate others on what I learned throughout this process.
### :pushpin: Step 2
:family: **TARGET AUDIENCE (3-5+ sentences):**
Beginning programmers with minimal coding knowledge

### :pushpin: Step 3
> Outline your learning/teaching structure:
My teaching structure is straight to the point and step by step. I don't want to bore readers who don't want a full essay about a simple topic. People today often prefer skimming to reading, so I think including images and getting straight to the point will be the quickest and easiest way to learn.

**Beginning (2-3+ sentences):**
I will discuss my motive for creating the blog: out of all the ideas I could have chosen, why did I choose this one? I will then transition into the modules that need to be installed and imported on your system for this program to work.

**Middle (2-3+ sentences):**
I will discuss how to actually implement the program. Each piece of code will be explained concisely. There are about 10-15 lines of code to explain, so I don't want to write paragraphs about each line when some of it is self-explanatory.

**End (2-3+ sentences):**
Lastly, I will talk about how this topic relates to the real world and why it is relevant in today's society. I will also provide screenshots of what the output looks like and an ending statement encouraging others to pursue more web scraping!

**Loom Video Link:**
https://www.loom.com/share/edfaca5e142048e688d29bfd31972943