fileformats:json

  
**MimeType:** text/x-json

**Default File Suffix:** .json

**Comments**

Reading of json-files is not yet implemented.
  
==== Output of processed motion data ====
The output of motion data exported into the json-format differs from the data saved after processing:
  
Each line is a valid JSON entry without a separator at the end. This is similar to other readers and follows the [[http://hadoop.apache.org|Hadoop]] convention. [[http://hadoop.apache.org|Hadoop]] and [[http://spark.apache.org/|Spark]] use this format to make sure splits work properly in a cluster environment. For those new to the [[http://hadoop.apache.org|Hadoop]] file format convention, the reason is that a large file can be split into chunks and sent to different nodes in a cluster. If a record spanned multiple lines, a split might not receive the complete record, which would result in runtime and calculation errors. Where and how a job splits a file varies depending on the job configuration and cluster size.
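The line-per-record convention can be sketched in a few lines of Python. This is only an illustration of the layout, not code from this tool; the file name and the field names are assumptions.

<code python>
# Minimal sketch: write motion data as newline-delimited JSON,
# one complete record per line, with no trailing separator.
# The file name and the fields "frame", "x", "y" are made up for illustration.
import json

frames = [
    {"frame": 0, "x": 0.12, "y": 0.34},
    {"frame": 1, "x": 0.15, "y": 0.31},
]

with open("motion_export.json", "w") as f:
    for record in frames:
        # json.dumps keeps each record on a single line, so a line-based
        # split in Hadoop/Spark can never cut a record in half.
        f.write(json.dumps(record) + "\n")

# Reading it back: every line parses on its own, independent of the others.
with open("motion_export.json") as f:
    records = [json.loads(line) for line in f if line.strip()]
</code>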
  
=== Example ===
</code>
  
If both the "labelset name" and the "labelgroup name" are defined in the Export UI, then the "labelgroup name" is interpreted as a phase type name and only the frames of the given phase are exported.
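As a rough illustration of this phase filtering, the sketch below selects only the records of one phase from an exported file. The field name "phase" and the file layout are assumptions for illustration, not the exporter's actual schema.

<code python>
# Hedged sketch: keep only frames whose (assumed) "phase" field matches the
# labelgroup name chosen in the Export UI.
import json

def frames_for_phase(path, phase_name):
    """Yield exported records whose phase matches the chosen labelgroup name."""
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            if record.get("phase") == phase_name:
                yield record

# Example: keep only the frames of a hypothetical "stance" phase.
stance_frames = list(frames_for_phase("motion_export.json", "stance"))
</code>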