Spark JSON File Operations


PySpark JSON Overview

PySpark JSON functions are used to query or extract elements from a JSON string in a DataFrame column by path, or to convert it to a struct, map type, etc. In this article, I will explain the most commonly used JSON SQL functions with Python examples.

Reading a JSON File

spark.read.json() is used to read a JSON file into a DataFrame.

					df1 = spark.read.json("/FileStore/tables/first.json")

explode in JSON

explode is a PySpark function that operates on array columns and is used for analyzing nested column data: it converts each element of an array column into its own row. An array-of-arrays column can also be collapsed into a single array with the flatten function.

					import pyspark.sql.functions as f
from pyspark.sql.functions import explode

df2 = spark.read.json("/FileStore/tables/second.json")

# The column and field names below are assumed; the original snippet was truncated
dfDates = df2.select(explode("dates").alias("date"))
dfContent = df2.select(explode("content").alias("col"))
dfFooBar = dfContent.select("col.key", "col.value")  # "col.key" is a guess; the first selected field was lost

Reading a multiline JSON File

The multiLine option is used to read a JSON file whose records span multiple lines.

					df4 = spark.read.option("multiLine", True).option("mode", "PERMISSIVE").json("/FileStore/tables/test_multiLine.json")
