I did a bit of experimenting, and it looks like this lib is tricky to use in Databricks.
Any way we can provide an interface that doesn't require the user to set a configuration option?
Perhaps we could let the user run an import statement like `import org.apache.spark.sql.itachi.postgres._` to get all the functions? The function registration process is still a little fuzzy to me. Let me know if you think this would be possible!
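To make the idea concrete, here's a minimal sketch of the import-based interface I have in mind. This is just an illustration, not itachi's actual API: the object name and `arrayAppend` function are hypothetical stand-ins, and I've left Spark out entirely so the pattern is easy to see. The point is that a singleton object exposing the functions lets a single wildcard import bring everything into scope, with no configuration option required.

```scala
// Hypothetical sketch: a singleton object plays the role of
// org.apache.spark.sql.itachi.postgres, exposing functions directly.
// In the real library these would presumably wrap Spark Column expressions.
object postgres {
  // Stand-in for a Postgres-style function such as array_append.
  def arrayAppend[A](arr: Seq[A], elem: A): Seq[A] = arr :+ elem
}

object Demo extends App {
  // User-facing import, analogous to:
  //   import org.apache.spark.sql.itachi.postgres._
  import postgres._

  println(arrayAppend(Seq(1, 2), 3)) // prints List(1, 2, 3)
}
```

If the functions need to be registered with the session (e.g. for use in SQL strings) rather than called as Scala functions, an import alone wouldn't trigger that, so we'd probably still need one explicit call, just something lighter than a config option.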