Replies: 14 comments 15 replies
-
Hi @liuxinf , can you please attach the .nextflow.log file to help us debug this issue?
-
Hello, how do I use the contents of a PVC as the input to another process in Nextflow on K8s? Must the PVC be mounted in the pod of that process? For example, process A mounts a PVC and uses file1 from the PVC; file1 is then sent to process B as an output of process A. Thanks.
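(A minimal sketch of the data flow being asked about, assuming the PVC is mounted as the shared Nextflow work directory; the process and file names are placeholders:)

```
// Sketch: PROC_A writes file1.txt into its task directory (which lives on the
// shared PVC), declares it as an output, and the channel wiring passes it to
// PROC_B, which stages it as an input.
process PROC_A {
    output:
    path 'file1.txt'

    script:
    """
    echo "some content" > file1.txt
    """
}

process PROC_B {
    input:
    path file1

    script:
    """
    cat $file1
    """
}

workflow {
    PROC_B( PROC_A() )
}
```

With the k8s executor, both task pods mount the same PVC as the work directory, so the file can be staged from one task directory into the other without mounting the PVC anywhere else.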
-
thanks
Quoted from Ben's reply (Oct 17, 2022):
@liuxinf , I got an email with your nextflow log and config, but it is gone now. I think you should modify your nextflow config as follows:

```
process {
    executor = 'k8s'
}

k8s {
    namespace        = 'nextflow'
    serviceAccount   = 'tower-launcher-sa'
    storageClaimName = 'pvc-nextflow-nfs'
    pullPolicy       = 'IfNotPresent'
}
```

The autoMountHostPaths option is only used for local development, and storageSubPath should not be needed.
Also, please try the latest version of Nextflow (v22.10.0), as it includes some bug fixes to the k8s executor.
-
I would like to ask you some questions. I am using Nextflow on K8s, and I have some questions about the Nextflow engine. How is the pod for the Nextflow engine deployed? There is no documentation about this. Does it support multi-instance deployment, or only single-instance deployment? What does the basic configuration of the Nextflow engine pod (CPU, number of cores, memory, etc.) depend on, and what is the minimum configuration? Thank you.
-
Thank you very much for your answer. It is the problem I described. I mean that the Nextflow engine exists in the form of a pod in the K8s cluster. Can the pod of this Nextflow engine exist as multiple instances? Can it be controlled by a K8s controller, such as a Deployment, or deployed directly from a YAML file? That is what I mean by single- and multi-instance deployment.
Quoted from Ben's reply (Nov 3, 2022):
Here are the relevant docs for the Nextflow / K8s integration:
- k8s executor
- k8s config scope
- pod directive
- Nextflow / K8s general info

When you use the k8s executor, every Nextflow task is deployed as a pod. There is a class called PodSpecBuilder which defines how a task is mapped to a pod spec. It combines many standard Nextflow directives like cpus, memory, accelerator, etc., as well as a special pod directive which controls pod-specific options like config maps, secrets, etc.
You can also set k8s.computeResourceType = 'Job' to make every task a Job instead of a raw pod.
I'm not sure what you mean by single-instance versus multi-instance deployment. You must provision the K8s cluster yourself, and it can have as many or as few resources as you want. You can run multiple Nextflow pipelines at the same time, if that's what you mean.
As for a minimal configuration, you really only need to enable the k8s executor, provide a ReadWriteMany PVC, and provide credentials for the cluster (via a K8s service account or kubeconfig file). Other things like pod resources will have defaults if they are not provided.
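To illustrate the mapping described above, here is a hedged sketch of a process whose directives feed into the generated pod spec, plus the optional Job setting; the process name and secret name are placeholders, not values from this thread:

```
// nextflow.config (sketch): optionally submit each task as a K8s Job instead of a raw pod.
k8s {
    computeResourceType = 'Job'
}
```

```
// main.nf (sketch): standard directives plus the pod directive; PodSpecBuilder
// translates these into the pod spec for each task. 'my-secret' is a placeholder.
process example {
    cpus 4
    memory '8 GB'
    pod secret: 'my-secret', mountPath: '/etc/secret'

    script:
    """
    echo "the cpus, memory, and pod options above become part of the pod spec"
    """
}
```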
-
Thanks for your generous help.
- Does a job support mounting a local disk?
- How many volumes can be mounted?
- What is the copy path for the input and output of a job, for example: input -> copied to the working directory -> output written in the working directory -> copied to the real output directory?
- Or is it something else, and can each stage be configured?
- How well are soft links supported within a process?
- How does intermediate-result cache reuse work? It seems to depend on the input and output paths, but what if the input file changes while the path to the input file stays the same?
Our company is looking into using Nextflow, but I am new to it, so thank you for your answers. If you come to China in the future, I would appreciate it if you could contact me.
-
Excuse me, a few more questions. Which versions of Nextflow are matched with the open-source version of nf-tower? At that time, only the -with-tower mode was supported; that is, nf-tower could not execute workflows from the GUI, correct?
Thank you!
-
Thanks. How does an external component like nf-tower call Nextflow to launch a job? Nextflow has no external interface, so does it invoke the job through the command line?
Quoted from Ben's reply (Nov 29, 2022):
Hi @liuxinf , the community edition of Tower (seqeralabs/nf-tower) currently only supports monitoring workflows. So you can launch Nextflow from the command line with the -with-tower option and monitor the pipeline in nf-tower, but you can't launch pipelines from the nf-tower GUI.
Instead, you should use Tower Cloud, which is the fully-featured edition of Tower hosted by Seqera. You can create a free account, connect to your K8s cluster, and run an example pipeline to see how everything works. If you like it, you can upgrade to a paid account on Tower Cloud, or you can buy Tower Enterprise if you would rather have your own Tower instance.
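For reference, a hedged sketch of the monitoring setup described above (the command-line option is -with-tower, or equivalently the tower config scope); the token and endpoint values are placeholders:

```
// nextflow.config (sketch): enable run monitoring in Tower / nf-tower.
// Equivalent to passing -with-tower on the command line.
tower {
    enabled     = true
    accessToken = 'YOUR_TOWER_ACCESS_TOKEN'           // placeholder token
    // endpoint = 'https://your-nf-tower-host/api'    // placeholder; only needed for a self-hosted nf-tower
}
```

The access token can also be supplied through the TOWER_ACCESS_TOKEN environment variable instead of the config file.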
-
When I submit a job from Tower, does it create a new Nextflow driver pod each time, or is there only ever one? Does it embed the job's run command into the driver pod's startup process?
Quoted from Ben's reply (Dec 6, 2022):
In Tower Cloud/Enterprise, you create a "compute environment" which contains the information that Tower needs to connect to your K8s cluster. Then when you launch a job, Tower uses the K8s API to create a pod in your cluster that runs the Nextflow pipeline.
-
Thank you. The configuration with cleanup supports the process scope level. Why does it not work? When my input is the working directory, how can the cache ensure that the change is the only confirmation, and is it the time to view the folder? There is also the relationship between the input directory, temporary directory, working directory, and output directory. Thank you.
Quoted from Ben's reply (Dec 6, 2022):
Tower creates a new driver pod for each workflow run. The driver pod basically has a nextflow container that does nextflow run ....
-
Thanks. This is still in the K8s environment. What are the functions of stageInMode and stageOutMode, and why were these features designed? Does stageInMode need to be used together with scratch, or can it be used alone? Should stageOutMode be used together with publishDir, or separately? When stageInMode is not set, how are input files staged into the process working directory, and when stageOutMode is not set, how are output files staged out from the staging directory back to the process working directory?
Quoted from Ben's reply (Dec 8, 2022):
> The configuration with cleanup supports the process scope level. Why does it not work?

Are you asking about the cleanup config option? That option is global; it can't be applied to individual processes.

> When my input is the working directory, how can the cache ensure that the change is the only confirmation, and is it the time to view the folder?

I'm sorry, I don't know what you're asking. If you're wondering how Nextflow decides whether a cached file is still valid, see the cache directive. Nextflow uses the file path, size, and timestamp by default, but you can optionally make it lenient (ignore the timestamp) or deep (hash the file contents).
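For reference, a hedged sketch of these cache settings, shown both as a per-process directive and as a global default in the config; the process name is a placeholder:

```
// Per-process: relax the cache check for one process.
process align {
    cache 'lenient'   // ignore timestamps (useful on shared filesystems); 'deep' would hash file contents instead

    script:
    """
    echo "cached according to the directive above"
    """
}
```

```
// Or set a global default in nextflow.config:
process.cache = 'lenient'
```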
> There is also the relationship between input directory, temporary directory, working directory and output directory.

It can be useful to print some of the workflow variables during a pipeline run to help you understand these (see the docs):
- The launch directory is where nextflow run is executed.
- The work directory is where Nextflow stores task directories; the default is ${launchDir}/work.
- Every task has its own directory based on the work directory and the task hash. For example, if a task's hash is a15f7282d24818c725ec99c1d0e38210, then its task directory will be ${workDir}/a1/5f7282d24818c725ec99c1d0e38210.
- Nextflow doesn't have a specific input or output directory. You can provide input files from anywhere, and you can publish output files anywhere with the publishDir directive.
- Output files are created in their task directory, but if you want Nextflow to save them as workflow outputs, you should use publishDir to publish them to an output directory of your choice (see the sketch after this list).
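A hedged sketch tying these together: it prints the launch and work directories and publishes one task output to a chosen directory; the process name, file name, and 'results' path are placeholders:

```
// Sketch: print the workflow directories and publish one task output.
params.outdir = 'results'

process makeReport {
    publishDir params.outdir, mode: 'copy'   // copy the declared outputs out of the task directory

    output:
    path 'report.txt'

    script:
    """
    echo "hello" > report.txt
    """
}

workflow {
    println "launchDir: ${workflow.launchDir}"   // where 'nextflow run' was executed
    println "workDir  : ${workflow.workDir}"     // where the task directories are created
    makeReport()
}
```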
If you enable the scratch directive, each task will perform its work in a scratch directory and stage its outputs to its task directory. To do this in Kubernetes, you will need to attach local storage to pods with an emptyDir volume and request disk storage with the disk directive. Using scratch storage can help if your local storage is much faster than shared storage, but it also requires extra copying so you need to make sure it's actually worthwhile.
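A hedged sketch limited to the two directives named above (scratch and disk); how the local volume is actually attached to the pod (e.g. the emptyDir details) depends on your cluster and is not shown here. The process name and disk size are placeholders:

```
// Sketch: run the task in node-local scratch space and request local disk.
process heavyIO {
    scratch true   // do the work in a local scratch directory, then stage outputs back to the task directory
    disk '50 GB'   // request local disk storage for the task

    script:
    """
    echo "intermediate files are written to fast local storage"
    """
}
```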
-
A few more questions:
Does nf-tower create one Nextflow driver pod before launching a workflow, or does it create a driver pod every time a process is executed? When is the driver pod destroyed? When does the job actually run? How does nf-tower pass the nextflow run instructions to the driver pod, and how does the driver pod pass the execution instructions to the process pods?
Thank you very much.
-
Thanks. How do I configure the log level when executing nextflow run? I want to see the trace-level logs in .nextflow.log. How can I configure this? Thanks!
Quoted from Ben's reply (Dec 8, 2022):
Tower creates a driver pod for every workflow run, and it uses a prebuilt "nf-launcher" image to run the pipeline. Tower provides all of the parameters like pipeline name, params, config files, input data, etc, and the driver pod executes the nextflow run command. It is the normal setup for a pod where you specify a container and a command to run. So the driver pod is destroyed when the workflow completes.
Similarly, when Nextflow launches a task, it creates a pod spec with the task container and a command which runs the task script. You can use kubectl to inspect these pods as they are running to see all of this information.
-
Excuse me, how can I limit the size of the work directory / task directories, and how should it be handled if the work directory becomes full? Thanks.
-
"/bin/bash: .command.run: No such file or directory","Process assignedTask (1) terminated for an unknown reason -- Likely it has been terminated by the external system"