Create pull request/add yaml to load data (#1842)

* Add YAML to formats supported by load_data()

A fairly trivial addition; JSON and YAML are handled so similarly
that this was a matter of copying the JSON-relevant handlers and
editing the copies to handle YAML as well (see the sketch after this
message). The test file was literally generated with 'json2yaml'.

The documentation has been updated to indicate that load_data() now
handles YAML data.

The CHANGELOG has been updated as well.

* After checking, I found that it's generally agreed the MIME type is still application/x-yaml.

* Update comment, unify library importing.

I noticed one more place where the list of supported formats is
spelled out, and added YAML to that list.

I noticed that there's a single place where the `libs::` crate is
imported, and unified things by importing serde_yaml in that place.
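
The copy-and-edit approach works because YAML deserializes into the same `Value` type that the JSON handler already produces, so the two loaders end up nearly identical. A minimal standalone sketch of that parity (using the `serde_json` and `serde_yaml` crates directly rather than Zola's `libs::` re-exports; not the actual Zola code):

```rust
// Illustration only, not Zola source. Both parsers target serde_json::Value
// (which Tera re-exports as its own Value type), so the JSON and YAML
// handlers differ only in which `from_str` they call.
// Requires the serde_json and serde_yaml crates.
use serde_json::Value;

fn parse_json(data: &str) -> Result<Value, String> {
    serde_json::from_str(data).map_err(|e| format!("{:?}", e))
}

fn parse_yaml(data: &str) -> Result<Value, String> {
    serde_yaml::from_str(data).map_err(|e| format!("{:?}", e))
}

fn main() {
    let json = r#"{"key": "value", "array": [1, 2, 3]}"#;
    let yaml = "key: value\narray:\n  - 1\n  - 2\n  - 3\n";
    // Equivalent documents parse to identical Values.
    assert_eq!(parse_json(json).unwrap(), parse_yaml(yaml).unwrap());
}
```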
Ken "Elf" Mathieu Sternberg 2022-04-29 11:22:02 -07:00 committed by GitHub
parent 3a0800c702
commit 77eb9fef9b
GPG Key ID: 4AEE18F83AFDEB23
4 changed files with 48 additions and 5 deletions

File: CHANGELOG

@@ -13,6 +13,7 @@ also specify classes on headers now
 - Make `ignored_content` work with nested paths and directories
 - `zola serve/build` can now run from anywhere in a zola directory
 - Add XML support to `load_data`
+- Add YAML support to `load_data`
 - `skip_prefixes` is now checked before parsing external link URLs
 - Add `render` attribute to taxonomies configuration in `config.toml`, for when you don't want to render
   any pages related to that taxonomy

File: load_data.rs (the `load_data` Tera function)

@@ -12,7 +12,7 @@ use libs::tera::{
     from_value, to_value, Error, Error as TeraError, Function as TeraFn, Map, Result, Value,
 };
 use libs::url::Url;
-use libs::{nom_bibtex, serde_json, toml};
+use libs::{nom_bibtex, serde_json, serde_yaml, toml};
 use utils::de::fix_toml_dates;
 use utils::fs::{get_file_time, read_file};
@@ -47,6 +47,7 @@ enum OutputFormat {
     Bibtex,
     Plain,
     Xml,
+    Yaml,
 }
 impl FromStr for OutputFormat {
@@ -60,6 +61,7 @@ impl FromStr for OutputFormat {
             "bibtex" => Ok(OutputFormat::Bibtex),
             "xml" => Ok(OutputFormat::Xml),
             "plain" => Ok(OutputFormat::Plain),
+            "yaml" => Ok(OutputFormat::Yaml),
             format => Err(format!("Unknown output format {}", format).into()),
         }
     }
@@ -74,6 +76,7 @@ impl OutputFormat {
             OutputFormat::Bibtex => "application/x-bibtex",
             OutputFormat::Xml => "text/xml",
             OutputFormat::Plain => "text/plain",
+            OutputFormat::Yaml => "application/x-yaml",
         })
     }
 }
@@ -208,7 +211,7 @@ fn add_headers_from_args(header_args: Option<Vec<String>>) -> Result<HeaderMap>
 }
 /// A Tera function to load data from a file or from a URL
-/// Currently the supported formats are json, toml, csv, bibtex and plain text
+/// Currently the supported formats are json, toml, csv, yaml, bibtex and plain text
 #[derive(Debug)]
 pub struct LoadData {
     base_path: PathBuf,
@@ -388,6 +391,7 @@ impl TeraFn for LoadData {
             OutputFormat::Json => load_json(data),
             OutputFormat::Bibtex => load_bibtex(data),
             OutputFormat::Xml => load_xml(data),
+            OutputFormat::Yaml => load_yaml(data),
             OutputFormat::Plain => to_value(data).map_err(|e| e.into()),
         };
@@ -406,6 +410,13 @@ fn load_json(json_data: String) -> Result<Value> {
     Ok(json_content)
 }
+/// Parse a YAML string and convert it to a Tera Value
+fn load_yaml(yaml_data: String) -> Result<Value> {
+    let yaml_content: Value =
+        serde_yaml::from_str(yaml_data.as_str()).map_err(|e| format!("{:?}", e))?;
+    Ok(yaml_content)
+}
+
 /// Parse a TOML string and convert it to a Tera Value
 fn load_toml(toml_data: String) -> Result<Value> {
     let toml_content: toml::Value = toml::from_str(&toml_data).map_err(|e| format!("{:?}", e))?;
@@ -1084,6 +1095,25 @@ mod tests {
         )
     }
+    #[test]
+    fn can_load_yaml() {
+        let static_fn = LoadData::new(PathBuf::from("../utils/test-files"), None, PathBuf::new());
+        let mut args = HashMap::new();
+        args.insert("path".to_string(), to_value("test.yaml").unwrap());
+        let result = static_fn.call(&args.clone()).unwrap();
+        assert_eq!(
+            result,
+            json!({
+                "key": "value",
+                "array": [1, 2, 3],
+                "subpackage": {
+                    "subkey": 5
+                }
+            })
+        )
+    }
+
     #[test]
     fn is_load_remote_data_using_post_method_with_different_body_not_cached() {
         let _mjson = mock("POST", "/kr1zdgbm4y3")
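
Taken together, the hunks above wire YAML into the existing pipeline: a `format="yaml"` argument (or a `.yaml` path extension) resolves to `OutputFormat::Yaml`, remote requests advertise `application/x-yaml`, and the fetched body is handed to `load_yaml`. A condensed, hypothetical mirror of that flow (helper names such as `as_accept_header` and `load` are illustrative, not the module's actual API):

```rust
// Condensed sketch of the dispatch shown in the diff above; not Zola source.
// Requires the serde_json and serde_yaml crates.
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum OutputFormat {
    Json,
    Yaml,
    Plain,
}

impl FromStr for OutputFormat {
    type Err = String;

    fn from_str(output_format: &str) -> Result<Self, Self::Err> {
        match output_format {
            "json" => Ok(OutputFormat::Json),
            "yaml" => Ok(OutputFormat::Yaml),
            "plain" => Ok(OutputFormat::Plain),
            format => Err(format!("Unknown output format {}", format)),
        }
    }
}

impl OutputFormat {
    /// Content type advertised when fetching remote data in this format.
    fn as_accept_header(&self) -> &'static str {
        match self {
            OutputFormat::Json => "application/json",
            OutputFormat::Yaml => "application/x-yaml",
            OutputFormat::Plain => "text/plain",
        }
    }
}

/// Parse the fetched body according to the resolved format.
fn load(format: &OutputFormat, data: String) -> Result<serde_json::Value, String> {
    match format {
        OutputFormat::Json => serde_json::from_str(&data).map_err(|e| format!("{:?}", e)),
        OutputFormat::Yaml => serde_yaml::from_str(&data).map_err(|e| format!("{:?}", e)),
        OutputFormat::Plain => Ok(serde_json::Value::String(data)),
    }
}

fn main() {
    let format = OutputFormat::from_str("yaml").unwrap();
    assert_eq!(format.as_accept_header(), "application/x-yaml");

    let value = load(&format, "subpackage:\n  subkey: 5\n".to_string()).unwrap();
    assert_eq!(value["subpackage"]["subkey"], 5);
}
```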

File: test.yaml (new test fixture)

@@ -0,0 +1,9 @@
+---
+key: "value"
+array:
+  - 1
+  - 2
+  - 3
+subpackage:
+  subkey: 5

File: `load_data` documentation

@@ -259,7 +259,8 @@ The method returns a map containing `width`, `height` and `format` (the lowercase
 ### `load_data`

-Loads data from a file, URL, or string literal. Supported file types include *toml*, *json*, *csv*, *bibtex* and *xml* and only supports UTF-8 encoding.
+Loads data from a file, URL, or string literal. Supported file types include *toml*, *json*, *csv*, *bibtex*, *yaml*
+and *xml* and only supports UTF-8 encoding.

 Any other file type will be loaded as plain text.
@@ -293,7 +294,9 @@ The snippet below outputs the HTML from a Wikipedia page, or "No data found" if
 {% if data %}{{ data | safe }}{% else %}No data found{% endif %}
 ```

-The optional `format` argument allows you to specify and override which data type is contained within the specified file or URL. Valid entries are `toml`, `json`, `csv`, `bibtex`, `xml` or `plain`. If the `format` argument isn't specified, then the path extension is used. In the case of a literal, `plain` is assumed if `format` is unspecified.
+The optional `format` argument allows you to specify and override which data type is contained within the specified file or URL.
+Valid entries are `toml`, `json`, `csv`, `bibtex`, `yaml`, `xml` or `plain`. If the `format` argument isn't specified, then the
+path extension is used. In the case of a literal, `plain` is assumed if `format` is unspecified.

 ```jinja2
@@ -302,7 +305,7 @@ The optional `format` argument allows you to specify and override which data type
 Use the `plain` format for when your file has a supported extension but you want to load it as plain text.

-For *toml*, *json* and *xml*, the data is loaded into a structure matching the original data file;
+For *toml*, *json*, *yaml* and *xml*, the data is loaded into a structure matching the original data file;
 however, for *csv* there is no native notion of such a structure. Instead, the data is separated
 into a data structure containing *headers* and *records*. See the example below to see
 how this works.
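
A small standalone sketch of the distinction this last paragraph draws (illustration only, assuming the serde crates; the `headers`/`records` shape follows the `load_data` documentation above):

```rust
// Structured formats (toml/json/yaml/xml) keep the document's own shape,
// while CSV is exposed as a "headers" array plus a "records" array of rows.
// Requires the serde_json and serde_yaml crates.
use serde_json::{json, Value};

fn main() {
    // YAML maps directly onto the original document structure.
    let yaml: Value = serde_yaml::from_str("name: zola\ntags: [ssg, rust]\n").unwrap();
    assert_eq!(yaml, json!({ "name": "zola", "tags": ["ssg", "rust"] }));

    // CSV has no native nesting, so load_data presents it as headers + records.
    let csv_as_loaded = json!({
        "headers": ["name", "lang"],
        "records": [["zola", "rust"]]
    });
    assert_eq!(csv_as_loaded["headers"][0], "name");
}
```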