
Commit 0c59be4

committed
typos
1 parent 24e214a commit 0c59be4

File tree

1 file changed: 3 additions & 3 deletions


docs/index.md

Lines changed: 3 additions & 3 deletions
@@ -16,7 +16,7 @@ Bioschemas is a community initiative to improve the findability of life science
 
 Here, we consider the following dataset: [https://registry.dome-ml.org/review/dfyn1yvtz3#dataset](https://registry.dome-ml.org/review/dfyn1yvtz3#dataset). More details for this dataset can be found in [this GitHub repository](https://github.com/RyanCook94/inphared).
 
-After browsing the [Bioschemas dataset profile](https://bioschemas.org/profiles/Dataset/1.0-RELEASE), we figure out that we **must** provide the required metatdata, and we **should** provide the recommended metadata. For brevity, we will ommit the optional metadata.
+After browsing the [Bioschemas dataset profile](https://bioschemas.org/profiles/Dataset/1.0-RELEASE), we figure out that we **must** provide the required metadata, and we **should** provide the recommended metadata. For brevity, we will omit the optional metadata.
 
 The **required metadata** fields are (`description`, `identifier`, `keywords`, `license`, `name`, `url`)
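The required fields listed in the hunk above can be sketched as a JSON-LD document. Below is a minimal Python sketch; all values are illustrative placeholders, not the real INPHARED record.

```python
import json

# Minimal Bioschemas Dataset JSON-LD carrying only the *required* fields:
# description, identifier, keywords, license, name, url.
# Every value below is a placeholder for illustration only.
dataset_jsonld = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example phage genome dataset",              # placeholder
    "description": "An illustrative dataset record.",     # placeholder
    "identifier": "https://registry.dome-ml.org/review/dfyn1yvtz3#dataset",
    "keywords": ["phage", "genomics", "machine learning"],  # placeholders
    "license": "https://creativecommons.org/licenses/by/4.0/",  # placeholder
    "url": "https://github.com/RyanCook94/inphared",
}

# Check that nothing required by the profile is missing.
required = {"description", "identifier", "keywords", "license", "name", "url"}
missing = required - dataset_jsonld.keys()
print(json.dumps(dataset_jsonld, indent=2))
print("missing required fields:", sorted(missing))
```

The same set-difference check can be reused against the recommended fields of the profile.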

@@ -54,7 +54,7 @@ For more details on JSON-LD encoding of Schema.org, please refer to the [Biosche
 The last step of the annotation process consists in making the metadata accessible. This can be done by adding the metadata to the HTML code of the dataset webpage.
 
 ## Step 3. Annotating an ML software by using the computational tool profile
-We have seen how to manually write Bioschemas metatadata in JSON-LD to annotate a sample dataset. Now, we will see how we can use a software registry to lighten the annotation process.
+We have seen how to manually write Bioschemas metadata in JSON-LD to annotate a sample dataset. Now, we will see how we can use a software registry to lighten the annotation process.
 
 [Bio.tools](https://bio.tools) is a registry of software tools for the life sciences. It allows users to submit new tools and to search for existing ones. During the submission process, users are asked to provide metadata about the tool. This metadata is then used to dynamically generate a JSON-LD representation on the web page describing the tool.
 
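The "adding the metadata to the HTML code" step from the hunk above amounts to embedding the JSON-LD in a `<script type="application/ld+json">` element. A minimal sketch (the metadata values are assumed placeholders):

```python
import json

# Wrap JSON-LD metadata in the <script> element that would be placed in the
# dataset web page's <head>. Values are illustrative placeholders.
metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example phage genome dataset",  # placeholder
    "url": "https://github.com/RyanCook94/inphared",
}

snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(metadata, indent=2)
    + "\n</script>"
)
print(snippet)
```

Pasting the printed snippet into the page's `<head>` is enough for crawlers and validators to pick up the metadata.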

@@ -67,7 +67,7 @@ If we check this web page with the [schema.org validation tool](https://validato
 **The key message here is that choosing the right registry is already a key step towards FAIRer ML resources.**
 
 ## Step 4. Evaluating the global FAIRness of the annotated ML resources
-Finally we will breifly explore how to evaluate the FAIRness of the annotated resources.
+Finally we will briefly explore how to evaluate the FAIRness of the annotated resources.
 We will use the [FAIRChecker](https://fair-checker.france-bioinformatique.fr) tool. This tool allows users to evaluate the FAIRness of a resource by checking the presence of semantic metadata.
 
 If we consider an [ML dataset](https://www.kaggle.com/datasets/ankushpanday1/heart-attack-in-youth-vs-adult-in-germany) registered in Kaggle, we can see that the dataset is FAIR enough:
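The core check described above (does a page embed machine-readable metadata?) can be sketched with the standard-library HTML parser. This is a simplified stand-in for illustration, not FAIRChecker's actual implementation, and the sample page content is made up:

```python
import json
from html.parser import HTMLParser

# Extract JSON-LD blocks embedded in a page's <script type="application/ld+json">
# elements -- the kind of semantic metadata a FAIRness evaluator looks for.
class JsonLdExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self.blocks.append(json.loads("".join(self._buf)))
            self._buf = []
            self._in_jsonld = False

# Made-up sample page standing in for a real dataset landing page.
page = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Dataset", "name": "demo"}
</script>
</head><body>...</body></html>"""

parser = JsonLdExtractor()
parser.feed(page)
print("JSON-LD blocks found:", len(parser.blocks))
```

A page yielding zero blocks would fail the "findable semantic metadata" checks that drive the FAIRness score.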
