I am new to Go and Gota and am wondering if there is anything implemented yet that I could leverage to forward fill my dataframe, similar to pandas ffill: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ffill.html

I'm afraid I'm not nearly competent enough yet to contribute, but I would love to someday once I learn a lot more Go. In the meantime, perhaps this can serve as a sort of feature request people can vote on, although I'm probably not the first person to ask for this, so apologies if I'm just making noise here.

Here is a reproducible example of where I'm coming from:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"regexp"
	"strings"
	"sync"

	"github.com/go-gota/gota/dataframe"
)

// Create a wait group
var wg sync.WaitGroup

// Get api response (expects format=csv) and make a dataframe from it
func getDf(url string, c chan dataframe.DataFrame) {
	// Need to make sure we tell the wait group we are done
	defer wg.Done()
	// Pull chart name from the url
	re := regexp.MustCompile("chart=(.*?)&")
	match := re.FindStringSubmatch(url)
	chart := match[1]
	resp, _ := http.Get(url)
	// Get body as string for ReadCSV
	bodyBytes, _ := ioutil.ReadAll(resp.Body)
	bodyString := string(bodyBytes)
	df := dataframe.ReadCSV(strings.NewReader(bodyString))
	// Add chart suffix to each col name
	// (ignore first col which should be "time" and used for joins later)
	colnames := df.Names()
	for i, colname := range colnames {
		if i != 0 {
			df = df.Rename(chart+"|"+colname, colname)
		}
	}
	// Send df to channel
	c <- df
}

func main() {
	// Define a list of api calls we want data from
	// In this example we have an api call for each chart whose data we want in our df
	urls := []string{
		"https://london.my-netdata.io/api/v1/data?chart=system.cpu&format=csv&after=-10",
		"https://london.my-netdata.io/api/v1/data?chart=system.net&format=csv&after=-10",
		"https://london.my-netdata.io/api/v1/data?chart=system.load&format=csv&after=-10",
		"https://london.my-netdata.io/api/v1/data?chart=system.io&format=csv&after=-10",
	}
	// Create a channel of dataframes the size of the number of api calls we need to make
	dfChannel := make(chan dataframe.DataFrame, len(urls))
	// Create an empty df we will outer join into from the df channel later
	df := dataframe.ReadJSON(strings.NewReader(`[{"time":"1900-01-01 00:00:01"}]`))
	// Kick off a goroutine for each url
	for _, url := range urls {
		wg.Add(1)
		go getDf(url, dfChannel)
	}
	// Handle synchronization of channel
	wg.Wait()
	close(dfChannel)
	// Pull each df from the channel and outer join onto our original empty df
	for dfTmp := range dfChannel {
		df = df.OuterJoin(dfTmp, "time")
	}
	// Sort based on time
	df = df.Arrange(dataframe.Sort("time"))
	// Print df
	fmt.Println(df, 10, 5)
	// Describe df
	//fmt.Println(df.Describe())
}
The df I end up with above is an outer join of lots of df's which may have different frequencies in time, e.g. some have data every 5 seconds and some every 1 second. So I'd love to just ffill() all the NaN values with the last known value.
I'm guessing this could maybe be done with a custom function using Capply?
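For what it's worth, the kind of thing I have in mind is a rough, untested sketch like the one below, assuming Capply hands each column over as a series.Series and that the numeric columns come through as floats; ffillSeries is just a hypothetical helper name, not anything in gota:

import (
	"math"

	"github.com/go-gota/gota/series"
)

// ffillSeries forward-fills NaN values in a numeric column with the last
// known value, leaving non-numeric columns (like "time") untouched.
func ffillSeries(s series.Series) series.Series {
	if s.Type() != series.Float && s.Type() != series.Int {
		return s
	}
	vals := s.Float()
	last := math.NaN()
	for i, v := range vals {
		if math.IsNaN(v) {
			// Replace NaN with the most recently seen value
			vals[i] = last
		} else {
			last = v
		}
	}
	return series.New(vals, series.Float, s.Name)
}

// Then, after the outer join and sort in main, something like:
// df = df.Capply(ffillSeries)

One caveat with a sketch like this: leading NaNs (before the first observed value in a column) would stay NaN, and Int columns would come back as Float.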