Commit e50ad47: Merge pull request #46 from ISISComputingGroup/datastreaming_local_instance

Add docs on running own kafka instance

2 parents: 68e4d3a + cc53b85

3 files changed: +27 lines, -12 lines

doc/specific_iocs/dae/Datastreaming.md (4 additions, 0 deletions)

```diff
@@ -36,6 +36,10 @@ Automation team. See `\\isis\shares\ISIS_Experiment_Controls\On Call\autoreducti
 support information.
 :::
 
+### I want my own local instance of Kafka
+
+See {ref}`localredpanda`
+
 ## Neutron Data
 
 The ICP on any instrument that is running in full event mode and with a DAE3 may stream neutron events into Kafka.
```
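Since the event stream is just raw Kafka messages, it can be inspected with any Kafka client. Below is a hedged sketch using the third-party `kafka-python` package (an assumption; any client works). The `<INSTRUMENT>_events` topic name and the broker address are illustrative placeholders, not documented ISIS conventions.

```python
# Hedged sketch: peek at raw neutron-event messages on a Kafka topic.
# kafka-python is a third-party assumption; topic naming is hypothetical.

def event_topic(instrument: str) -> str:
    """Build an event-topic name; `<INSTRUMENT>_events` is an illustrative
    convention for this sketch, not a documented ISIS one."""
    return f"{instrument.upper()}_events"


def consume_events(bootstrap: str, instrument: str, max_messages: int = 10) -> None:
    """Print offset and size of a few raw messages; payloads are flatbuffers
    (`ev42`/`ev44`) event blobs, so decode them with a schema-aware tool."""
    from kafka import KafkaConsumer  # third-party; imported lazily

    consumer = KafkaConsumer(
        event_topic(instrument),
        bootstrap_servers=bootstrap,
        auto_offset_reset="latest",
        consumer_timeout_ms=5000,  # give up if the stream goes quiet
    )
    for count, message in enumerate(consumer, start=1):
        print(message.topic, message.offset, len(message.value), "bytes")
        if count >= max_messages:
            break


# e.g. consume_events("localhost:9092", "demo")
```

The payloads are not human-readable; they need de-serialising against the relevant flatbuffers schema.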

doc/specific_iocs/dae/datastreaming/Datastreaming---Sample-Environment.md (15 additions, 6 deletions)

```diff
@@ -2,16 +2,25 @@
 
 All IBEX instruments are currently forwarding their sample environment PVs into Kafka. This is done in two parts:
 
-### BlockserverToKafka
+## BlockserverToKafka
+
 This is a Python process that runs on each NDX (see code [here](https://github.com/ISISComputingGroup/EPICS-inst_servers/tree/master/BlockServerToKafka)); it monitors the blockserver config PVs and, any time the config changes, pushes a new configuration to the forwarder via the Kafka topic `forwarder_config`. This process is written and managed by IBEX developers.
 
-The instrument name for the BlockServerToKafka service is `BSKAFKA`.
+The `procserv` name for the BlockServerToKafka service is `BSKAFKA`.
 
-### Forwarder
-This is a Python program responsible for taking the EPICS data and pushing into Kafka. ISIS currently has two instances of the forwarder running (one for the production and one for development). They are both running as services (Developer Forwarder and Production Forwarder) under `nssm` on NDADATASTREAM, which can be accessed via the `ibexbuilder` account. The configuration files and logs for these forwarders are located in `C:\Forwarder\dev_forwarder` and `C:\Forwarder\prod_forwarder`. The actual source lives in `C:\forwarder\fw_py`, updating it is a case of running `git pull` and re-installing the requirements.
+## Forwarder
 
 Source for the forwarder is available [here](https://github.com/ess-dmsc/forwarder)
 
-We don't currently run this, and need to figure out topology ie. running a central forwarder, one per instrument and so on.
+We don't currently run this for every instrument, and still need to decide on a topology, i.e. a central forwarder, one per instrument, and so on.
+
+### Forwarder on HIFI
+
+HIFI has an instance of the forwarder currently running under procServ within IBEX for the SuperMuSR project.
+
+In `C:\Instrument\Apps\EPICS\utils\build_ioc_startups.py` we have hotfixed this line:
+`ioc_startups.add("FWDR", IocStartup(os.path.join("C:\\", "instrument", "dev", "forwarder"), description="forward epics to kafka", exe="forwarder_launch.bat", iocexe="procServ.exe"))`
+
+to add a procServ entry that runs it.
 
-_NB: The forwarder was previously written in C++ but has now migrated to Python instead._
+HIFI's `ISIS/inst_servers/master/start_bs_to_kafka_cmd.bat` points to the SuperMuSR Redpanda instance rather than the normal `livedata` cluster.
```
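To make the BlockserverToKafka / forwarder split concrete, here is an illustration of the moving parts: build a config update describing which PVs to forward, then publish it to the `forwarder_config` topic. The real forwarder consumes flatbuffers-encoded config messages (schemas in ess-dmsc/streaming-data-types); the JSON payload, helper names, topic for the data streams, and broker address below are all hypothetical, chosen only to show the shape of the mechanism. `kafka-python` is a third-party assumption.

```python
# Illustration only: a BlockserverToKafka-style config update. The real wire
# format is flatbuffers, not JSON; this sketch just shows the moving parts.
import json


def build_config(pv_names):
    """Map block PVs to streams the forwarder should push into Kafka.
    The "cmd"/"streams" JSON shape is illustrative, not the real schema."""
    return json.dumps({
        "cmd": "add",
        "streams": [
            {"channel": pv, "topic": "instrument_sample_env"} for pv in pv_names
        ],
    })


def push_config(bootstrap: str, payload: str) -> None:
    """Publish a config update to the `forwarder_config` topic."""
    from kafka import KafkaProducer  # third-party; imported lazily

    producer = KafkaProducer(bootstrap_servers=bootstrap)
    producer.send("forwarder_config", payload.encode())
    producer.flush()


# e.g. push_config("localhost:9092", build_config(["IN:DEMO:TEMP", "IN:DEMO:FIELD"]))
```

In the real system, BlockserverToKafka regenerates and re-publishes this configuration every time the blockserver config changes, so the forwarder always tracks the current block set.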

doc/specific_iocs/dae/datastreaming/Datastreaming-How-To.md (8 additions, 6 deletions)

```diff
@@ -5,20 +5,22 @@ This is a guide for basic operations using either the development or production
 Note that there are many ways to do the following; what is written here is the way commonly done at ISIS on our development and production clusters. Something like `kafka-tool` is a nice GUI that will list topics, brokers, etc. and create or delete topics. You may have more luck running things like `kafkacat`, `kafkacow` or any of the official Kafka scripts under the [Windows Subsystem for Linux](https://docs.microsoft.com/en-gb/windows/wsl/install-win10)
 
 
-
 ## Topic operations
 
-Pushing to one topic does not necessarily mean that the other topics in the cluster receive the data and replicate it, so use with caution. If you need to create a topic that is replicated through all of the topics you should probably follow [this guide](https://coralogix.com/blog/create-kafka-topics-in-3-easy-steps/) by `ssh` on the actual server machines themselves.
-
 ### Create a new topic
 
-There is a script in the [isis-filewriter](https://github.com/ISISComputingGroup/isis-filewriter/tree/master/scripts) repository which will create a script for you. It takes a broker, topic name, and number of partitions (usually 1 partition is fine for a basic topic, more for concurrent streams)
+This can be done through the Redpanda console or via a Kafka API call.
 
 ### List topics
 
-To list topics on a broker you need to use the metadata API. GUIs such as offset-explorer can do this quite easily, or you can use [Kafkacat](https://github.com/edenhill/kafkacat) or [Kafkacow](https://github.com/ess-dmsc/kafkacow)
+This can be done through the Redpanda console or via a Kafka API call.
 
 ### Viewing or "consuming" data from a topic
 
-Like above, the best way of doing this programmatically is by using the API in your given language. [Saluki](https://github.com/rerpha/saluki) does this and de-serialises from the relevant flatbuffers schema and prints it out ie. `ev42`/`ev44` for event data - [see the full list of schemas](https://github.com/ess-dmsc/streaming-data-types)
+[Saluki](https://github.com/ISISComputingGroup/saluki) can be used for de-serialising the flatbuffers-encoded blobs that are pushed into Kafka.
+
+
```
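As a concrete sketch of "a Kafka API call" for the topic operations above, here is what create-and-list could look like with the third-party `kafka-python` package (an assumption; the Redpanda console, `rpk`, or `kafkacat` do the same job). The broker address and topic name are placeholders. The name-validation helper reflects Kafka's documented topic-name rules.

```python
# Hedged sketch: topic create/list via a Kafka API call. kafka-python is a
# third-party assumption; broker address and topic name are placeholders.
import re


def valid_topic_name(name: str) -> bool:
    """Kafka topic names may use ASCII letters, digits, '.', '_' and '-',
    up to 249 characters; '.' and '..' are reserved."""
    if name in {".", ".."}:
        return False
    return bool(re.fullmatch(r"[a-zA-Z0-9._-]{1,249}", name))


def create_topic(bootstrap: str, name: str, partitions: int = 1):
    """Create a topic (1 partition is fine for a basic topic, more for
    concurrent streams) and return the broker's current topic list."""
    from kafka.admin import KafkaAdminClient, NewTopic  # third-party; lazy

    if not valid_topic_name(name):
        raise ValueError(f"illegal topic name: {name!r}")
    admin = KafkaAdminClient(bootstrap_servers=bootstrap)
    admin.create_topics(
        [NewTopic(name=name, num_partitions=partitions, replication_factor=1)]
    )
    return sorted(admin.list_topics())


# e.g. create_topic("localhost:19092", "demo_topic")
```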
```diff
+{#localredpanda}
+## Run my own instance of Kafka/Redpanda
 
+This is done easily by running [this](https://docs.redpanda.com/redpanda-labs/docker-compose/single-broker/#run-the-lab) `docker-compose` file.
```
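For reference, the linked lab's compose file boils down to something like the following single-broker sketch. The image tag, port mapping, and flag values are assumptions condensed from that lab (which also starts Redpanda Console and pins exact versions); treat the linked file as authoritative.

```yaml
# Minimal single-broker Redpanda sketch; see the Redpanda labs compose file
# for the full, pinned version.
services:
  redpanda:
    image: docker.redpanda.com/redpandadata/redpanda:latest
    command:
      - redpanda
      - start
      - --mode dev-container
      - --smp 1
      - --kafka-addr internal://0.0.0.0:9092,external://0.0.0.0:19092
      - --advertise-kafka-addr internal://redpanda:9092,external://localhost:19092
    ports:
      - 19092:19092
```

Clients on the host machine would then connect to `localhost:19092`.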
