# Upgrading PySpark

## Upgrading from PySpark 3.5 to 4.0
- In Spark 4.0, Python 3.8 support was dropped in PySpark. 
- In Spark 4.0, the minimum supported version for Pandas has been raised from 1.0.5 to 2.0.0 in PySpark. 
- In Spark 4.0, the minimum supported version for Numpy has been raised from 1.15 to 1.21 in PySpark. 
- In Spark 4.0, the minimum supported version for PyArrow has been raised from 4.0.0 to 10.0.0 in PySpark. 
- In Spark 4.0, `Int64Index` and `Float64Index` have been removed from pandas API on Spark, `Index` should be used directly.
- In Spark 4.0, `DataFrame.iteritems` has been removed from pandas API on Spark, use `DataFrame.items` instead.
- In Spark 4.0, `Series.iteritems` has been removed from pandas API on Spark, use `Series.items` instead.
- In Spark 4.0, `DataFrame.append` has been removed from pandas API on Spark, use `ps.concat` instead (see the sketch after this list).
- In Spark 4.0, `Series.append` has been removed from pandas API on Spark, use `ps.concat` instead.
- In Spark 4.0, `DataFrame.mad` has been removed from pandas API on Spark.
- In Spark 4.0, `Series.mad` has been removed from pandas API on Spark.
- In Spark 4.0, the `na_sentinel` parameter from `Index.factorize` and `Series.factorize` has been removed from pandas API on Spark, use `use_na_sentinel` instead.
- In Spark 4.0, the `inplace` parameter from `Categorical.add_categories`, `Categorical.remove_categories`, `Categorical.set_categories`, `Categorical.rename_categories`, `Categorical.reorder_categories`, `Categorical.as_ordered`, and `Categorical.as_unordered` has been removed from pandas API on Spark.
- In Spark 4.0, the `inplace` parameter from `CategoricalIndex.add_categories`, `CategoricalIndex.remove_categories`, `CategoricalIndex.remove_unused_categories`, `CategoricalIndex.set_categories`, `CategoricalIndex.rename_categories`, `CategoricalIndex.reorder_categories`, `CategoricalIndex.as_ordered`, and `CategoricalIndex.as_unordered` has been removed from pandas API on Spark.
- In Spark 4.0, the `closed` parameter from `ps.date_range` has been removed from pandas API on Spark.
- In Spark 4.0, the `include_start` and `include_end` parameters from `DataFrame.between_time` have been removed from pandas API on Spark, use `inclusive` instead.
- In Spark 4.0, the `include_start` and `include_end` parameters from `Series.between_time` have been removed from pandas API on Spark, use `inclusive` instead.
- In Spark 4.0, the various datetime attributes of `DatetimeIndex` (`day`, `month`, `year`, etc.) are now `int32` instead of `int64` in pandas API on Spark.
- In Spark 4.0, the `sort_columns` parameter from `DataFrame.plot` and `Series.plot` has been removed from pandas API on Spark.
- In Spark 4.0, the default value of the `regex` parameter for `Series.str.replace` has been changed from `True` to `False` in pandas API on Spark. Additionally, a single-character `pat` with `regex=True` is now treated as a regular expression instead of a string literal.
- In Spark 4.0, the resulting name from `value_counts` for all objects is set to `'count'` (or `'proportion'` if `normalize=True` is passed) in pandas API on Spark, and the index will be named after the original object.
- In Spark 4.0, the `squeeze` parameter from `ps.read_csv` and `ps.read_excel` has been removed from pandas API on Spark.
- In Spark 4.0, the `null_counts` parameter from `DataFrame.info` has been removed from pandas API on Spark, use `show_counts` instead.
- In Spark 4.0, the result of `MultiIndex.append` does not keep the index names in pandas API on Spark.
- In Spark 4.0, `DataFrameGroupBy.agg` with lists now respects `as_index=False` in pandas API on Spark.
- In Spark 4.0, `DataFrame.stack` guarantees the order of existing columns instead of sorting them lexicographically in pandas API on Spark.
- In Spark 4.0, passing `True` or `False` to the `inclusive` parameter of `Series.between` has been removed from pandas API on Spark, use `both` or `neither` instead, respectively.
- In Spark 4.0, `Index.asi8` has been removed from pandas API on Spark, use `Index.astype` instead.
- In Spark 4.0, `Index.is_type_compatible` has been removed from pandas API on Spark, use `Index.isin` instead.
- In Spark 4.0, the `col_space` parameter from `DataFrame.to_latex` and `Series.to_latex` has been removed from pandas API on Spark.
- In Spark 4.0, `DataFrame.to_spark_io` has been removed from pandas API on Spark, use `DataFrame.spark.to_spark_io` instead.
- In Spark 4.0, `Series.is_monotonic` and `Index.is_monotonic` have been removed from pandas API on Spark, use `Series.is_monotonic_increasing` or `Index.is_monotonic_increasing` instead, respectively.
- In Spark 4.0, `DataFrame.get_dtype_counts` has been removed from pandas API on Spark, use `DataFrame.dtypes.value_counts()` instead.
- In Spark 4.0, the `encoding` parameter from `DataFrame.to_excel` and `Series.to_excel` has been removed from pandas API on Spark.
- In Spark 4.0, the `verbose` parameter from `DataFrame.to_excel` and `Series.to_excel` has been removed from pandas API on Spark.
- In Spark 4.0, the `mangle_dupe_cols` parameter from `read_csv` has been removed from pandas API on Spark.
- In Spark 4.0, `DataFrameGroupBy.backfill` has been removed from pandas API on Spark, use `DataFrameGroupBy.bfill` instead.
- In Spark 4.0, `DataFrameGroupBy.pad` has been removed from pandas API on Spark, use `DataFrameGroupBy.ffill` instead.
- In Spark 4.0, `Index.is_all_dates` has been removed from pandas API on Spark.
- In Spark 4.0, the `convert_float` parameter from `read_excel` has been removed from pandas API on Spark.
- In Spark 4.0, the `mangle_dupe_cols` parameter from `read_excel` has been removed from pandas API on Spark.
- In Spark 4.0, `DataFrame.koalas` has been removed from pandas API on Spark, use `DataFrame.pandas_on_spark` instead.
- In Spark 4.0, `DataFrame.to_koalas` has been removed from PySpark, use `DataFrame.pandas_api` instead.
- In Spark 4.0, `DataFrame.to_pandas_on_spark` has been removed from PySpark, use `DataFrame.pandas_api` instead.
- In Spark 4.0, `DatetimeIndex.week` and `DatetimeIndex.weekofyear` have been removed from Pandas API on Spark, use `DatetimeIndex.isocalendar().week` instead.
- In Spark 4.0, `Series.dt.week` and `Series.dt.weekofyear` have been removed from Pandas API on Spark, use `Series.dt.isocalendar().week` instead.
- In Spark 4.0, when applying `astype` to a decimal type object, the existing missing value is changed to `True` instead of `False` in Pandas API on Spark.
- In Spark 4.0, `pyspark.testing.assertPandasOnSparkEqual` has been removed from Pandas API on Spark, use `pyspark.pandas.testing.assert_frame_equal` instead.
- In Spark 4.0, the aliases `Y`, `M`, `H`, `T`, `S` have been deprecated from Pandas API on Spark, use `YE`, `ME`, `h`, `min`, `s` instead, respectively.
- In Spark 4.0, the schema of a map column is inferred by merging the schemas of all pairs in the map. To restore the previous behavior where the schema is only inferred from the first non-null pair, you can set `spark.sql.pyspark.legacy.inferMapTypeFromFirstPair.enabled` to `true` (see the configuration sketch after this list).
- In Spark 4.0, `compute.ops_on_diff_frames` is on by default. To restore the previous behavior, set `compute.ops_on_diff_frames` to `false` (see the configuration sketch after this list).
- In Spark 4.0, the data type `YearMonthIntervalType` in `DataFrame.collect` no longer returns the underlying integers. To restore the previous behavior, set the `PYSPARK_YM_INTERVAL_LEGACY` environment variable to `1`.
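
Since `DataFrame.append` and `Series.append` are gone in 4.0, a minimal migration sketch using `ps.concat` could look like the following; the DataFrames and column names are illustrative.

```python
# A minimal sketch of migrating from the removed DataFrame.append to ps.concat;
# the DataFrames and column names here are illustrative.
import pyspark.pandas as ps

psdf1 = ps.DataFrame({"a": [1, 2]})
psdf2 = ps.DataFrame({"a": [3, 4]})

# Before Spark 4.0: combined = psdf1.append(psdf2, ignore_index=True)
combined = ps.concat([psdf1, psdf2], ignore_index=True)
```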
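
For the two legacy-behavior switches above, a hedged configuration sketch might look like this, assuming an active `SparkSession` named `spark`:

```python
# A hedged sketch of restoring the pre-4.0 behaviors mentioned above; `spark`
# is assumed to be an active SparkSession.
import pyspark.pandas as ps

# Infer a map column's schema from the first non-null pair only (legacy behavior).
spark.conf.set("spark.sql.pyspark.legacy.inferMapTypeFromFirstPair.enabled", "true")

# Disallow operations between different DataFrames again (pre-4.0 default).
ps.set_option("compute.ops_on_diff_frames", False)
```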

## Upgrading from PySpark 3.3 to 3.4

- In Spark 3.4, the schema of an array column is inferred by merging the schemas of all elements in the array. To restore the previous behavior where the schema is only inferred from the first element, you can set `spark.sql.pyspark.legacy.inferArrayTypeFromFirstElement.enabled` to `true`.
- In Spark 3.4, if the Pandas on Spark API `Groupby.apply`'s `func` parameter return type is not specified and `compute.shortcut_limit` is set to 0, the number of sampling rows will be set to 2 (to ensure sampling rows are always >= 2) so that the inferred schema is accurate.
- In Spark 3.4, if the Pandas on Spark API `Index.insert` is out of bounds, it will raise an `IndexError` with `index {} is out of bounds for axis 0 with size {}` to follow pandas 1.4 behavior.
- In Spark 3.4, the series name will be preserved in the Pandas on Spark API `Series.mode` to follow pandas 1.4 behavior.
- In Spark 3.4, the Pandas on Spark API `Index.__setitem__` will first check whether the `value` type is `Column` to avoid raising an unexpected `ValueError` in `is_list_like`, such as "Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions".
- In Spark 3.4, the Pandas on Spark API `astype('category')` will also refresh `categories.dtype` according to the original data `dtype` to follow pandas 1.4 behavior.
- In Spark 3.4, the Pandas on Spark API supports groupby positional indexing in `GroupBy.head` and `GroupBy.tail` to follow pandas 1.4. Negative arguments now work correctly and result in ranges relative to the end and start of each group. Previously, negative arguments returned empty frames.
- In Spark 3.4, the schema inference process of `groupby.apply` in Pandas on Spark will first infer the pandas type to ensure the accuracy of the pandas `dtype` as much as possible.
- In Spark 3.4, the `Series.concat` sort parameter will be respected to follow pandas 1.4 behaviors.
- In Spark 3.4, `DataFrame.__setitem__` will make a copy and replace pre-existing arrays, which will NOT be over-written, to follow pandas 1.4 behaviors.
- In Spark 3.4, `SparkSession.sql` and the Pandas on Spark API `sql` have a new parameter `args`, which provides binding of named parameters to their SQL literals (see the sketch after this list).
- In Spark 3.4, Pandas API on Spark follows pandas 2.0, and some APIs were deprecated or removed in Spark 3.4 according to the changes made in pandas 2.0. Please refer to the [release notes of pandas](https://pandas.pydata.org/docs/dev/whatsnew/) for more details.
- In Spark 3.4, the custom monkey-patch of `collections.namedtuple` was removed, and `cloudpickle` was used by default. To restore the previous behavior for any relevant pickling issue of `collections.namedtuple`, set the `PYSPARK_ENABLE_NAMEDTUPLE_PATCH` environment variable to `1`.
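
As a rough illustration of the `args` parameter mentioned above, a hedged sketch, assuming an active `SparkSession` named `spark` and the Spark 3.4 form where the values are strings parsed as SQL literal expressions:

```python
# A hedged sketch of binding a named parameter with SparkSession.sql's `args`
# parameter; `spark` is assumed to be an active SparkSession, and the value is
# a string parsed as a SQL literal expression (the Spark 3.4 form).
spark.sql("SELECT * FROM range(10) WHERE id > :bound", args={"bound": "7"}).show()
```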

## Upgrading from PySpark 3.2 to 3.3

- In Spark 3.3, the `pyspark.pandas.sql` method follows [the standard Python string formatter](https://docs.python.org/3/library/string.html#format-string-syntax) (see the sketch after this list). To restore the previous behavior, set the `PYSPARK_PANDAS_SQL_LEGACY` environment variable to `1`.
- In Spark 3.3, the `drop` method of pandas API on Spark DataFrame supports dropping rows by `index`, and drops by index instead of column by default.
- In Spark 3.3, PySpark upgrades the Pandas version; the new minimum required version changes from 0.23.2 to 1.0.5.
- In Spark 3.3, the `repr` return values of SQL DataTypes have been changed to yield an object with the same value when passed to `eval`.
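
A minimal sketch of the standard-formatter behavior of `pyspark.pandas.sql`; the DataFrame, column names, and bound value here are illustrative.

```python
# A minimal sketch of pyspark.pandas.sql with the standard Python string
# formatter (Spark 3.3+); the DataFrame and values here are illustrative.
import pyspark.pandas as ps

psdf = ps.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
out = ps.sql("SELECT * FROM {tbl} WHERE A > {bound}", tbl=psdf, bound=1)
```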

## Upgrading from PySpark 3.1 to 3.2

- In Spark 3.2, the PySpark methods from the sql, ml, and spark_on_pandas modules raise `TypeError` instead of `ValueError` when applied to a param of inappropriate type.
- In Spark 3.2, the traceback from Python UDFs, pandas UDFs and pandas function APIs is simplified by default without the traceback from the internal Python workers. In Spark 3.1 or earlier, the traceback from Python workers was printed out. To restore the behavior before Spark 3.2, you can set `spark.sql.execution.pyspark.udf.simplifiedTraceback.enabled` to `false`.
- In Spark 3.2, pinned thread mode is enabled by default to map each Python thread to the corresponding JVM thread. Previously, one JVM thread could be reused for multiple Python threads, which resulted in one JVM thread local being shared across multiple Python threads. Also, note that `pyspark.InheritableThread` or `pyspark.inheritable_thread_target` is now recommended for a Python thread to properly inherit the inheritable attributes such as local properties in a JVM thread, and to avoid a potential resource leak issue (see the sketch below). To restore the behavior before Spark 3.2, you can set the `PYSPARK_PIN_THREAD` environment variable to `false`.
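
A hedged sketch of wrapping a Python thread in `pyspark.InheritableThread`, assuming an active `SparkSession` named `spark`; the scheduler pool name is illustrative.

```python
# A hedged sketch of using pyspark.InheritableThread under pinned thread mode;
# `spark` is assumed to be an active SparkSession.
from pyspark import InheritableThread

def job():
    # Runs a small Spark job; with InheritableThread, local properties set on
    # the parent thread are propagated to this thread's JVM counterpart.
    spark.range(10).count()

# Set a local property on the parent thread; InheritableThread propagates it.
spark.sparkContext.setLocalProperty("spark.scheduler.pool", "pool1")

t = InheritableThread(target=job)
t.start()
t.join()
```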

## Upgrading from PySpark 2.4 to 3.0

- In Spark 3.0, PySpark requires a pandas version of 0.23.2 or higher to use pandas related functionality, such as `toPandas`, `createDataFrame` from pandas DataFrame, and so on.
- In Spark 3.0, PySpark requires a PyArrow version of 0.12.1 or higher to use PyArrow related functionality, such as `pandas_udf`, `toPandas` and `createDataFrame` with `spark.sql.execution.arrow.enabled=true`, etc.
- In PySpark, when creating a `SparkSession` with `SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, the builder was trying to update the `SparkConf` of the existing `SparkContext` with configurations specified to the builder, but the `SparkContext` is shared by all `SparkSession`s, so we should not update them. In 3.0, the builder no longer updates the configurations. This is the same behavior as the Java/Scala API in 2.3 and above. If you want to update them, you need to update them prior to creating a `SparkSession`.
- In PySpark, when Arrow optimization is enabled, if the Arrow version is higher than 0.11.0, Arrow can perform safe type conversion when converting a pandas.Series to an Arrow array during serialization. Arrow raises errors when detecting unsafe type conversions like overflow. You enable it by setting `spark.sql.execution.pandas.convertToArrowArraySafely` to `true`; the default setting is `false` (see the configuration sketch after this list). PySpark behavior for Arrow versions is illustrated in the following table:

  | PyArrow version | Integer overflow | Floating point truncation |
  |---|---|---|
  | 0.11.0 and below | Raise error | Silently allows |
  | > 0.11.0, arrowSafeTypeConversion=false | Silent overflow | Silently allows |
  | > 0.11.0, arrowSafeTypeConversion=true | Raise error | Raise error |
- In Spark 3.0, `createDataFrame(..., verifySchema=True)` validates `LongType` as well in PySpark. Previously, `LongType` was not verified and resulted in `None` in case the value overflows. To restore this behavior, `verifySchema` can be set to `False` to disable the validation.
- As of Spark 3.0, `Row` field names are no longer sorted alphabetically when constructing with named arguments for Python versions 3.6 and above, and the order of fields will match that as entered (see the sketch after this list). To enable sorted fields by default, as in Spark 2.4, set the environment variable `PYSPARK_ROW_FIELD_SORTING_ENABLED` to `true` for both executors and driver - this environment variable must be consistent on all executors and driver; otherwise, it may cause failures or incorrect answers. For Python versions less than 3.6, the field names will be sorted alphabetically as the only option.
- In Spark 3.0, `pyspark.ml.param.shared.Has*` mixins do not provide any `set*(self, value)` setter methods anymore, use the respective `self.set(self.*, value)` instead. See SPARK-29093 for details.
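
For the Arrow safe type conversion described in the table above, a hedged configuration sketch, assuming an active `SparkSession` named `spark`:

```python
# A hedged sketch of enabling the Arrow safe type conversion checks described
# in the table above; `spark` is assumed to be an active SparkSession.
spark.conf.set("spark.sql.execution.pandas.convertToArrowArraySafely", "true")
```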
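
A minimal sketch of the `Row` field-ordering change; the field names and values are illustrative.

```python
# A minimal sketch of the Row field-ordering change in Spark 3.0; the field
# names and values here are illustrative.
from pyspark.sql import Row

r = Row(name="Alice", age=11)
# Spark 3.0 (Python 3.6+): Row(name='Alice', age=11)
# Spark 2.4:               Row(age=11, name='Alice')  # fields sorted alphabetically
```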

## Upgrading from PySpark 2.3 to 2.4

- In PySpark, when Arrow optimization is enabled, previously `toPandas` just failed when Arrow optimization is unable to be used whereas `createDataFrame` from Pandas DataFrame allowed the fallback to non-optimization. Now, both `toPandas` and `createDataFrame` from Pandas DataFrame allow the fallback by default, which can be switched off by `spark.sql.execution.arrow.fallback.enabled` (see the sketch below).
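
A hedged sketch of switching the fallback off, assuming an active `SparkSession` named `spark`:

```python
# A hedged sketch of disabling the Arrow fallback introduced in Spark 2.4;
# `spark` is assumed to be an active SparkSession.
spark.conf.set("spark.sql.execution.arrow.fallback.enabled", "false")
```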

## Upgrading from PySpark 2.3.0 to 2.3.1 and above

- As of version 2.3.1, Arrow functionality, including `pandas_udf` and `toPandas()`/`createDataFrame()` with `spark.sql.execution.arrow.enabled` set to `True`, has been marked as experimental. These are still evolving and not currently recommended for use in production.

## Upgrading from PySpark 2.2 to 2.3

- In PySpark, Pandas 0.19.2 or higher is now required to use Pandas related functionalities, such as `toPandas`, `createDataFrame` from Pandas DataFrame, etc.
- In PySpark, the behavior of timestamp values for Pandas related functionalities was changed to respect the session timezone. If you want to use the old behavior, you need to set the configuration `spark.sql.execution.pandas.respectSessionTimeZone` to `False`. See SPARK-22395 for details.
- In PySpark, `na.fill()` or `fillna` also accepts a boolean and replaces nulls with booleans (see the sketch after this list). In prior Spark versions, PySpark just ignores it and returns the original Dataset/DataFrame.
- In PySpark, `df.replace` does not allow omitting `value` when `to_replace` is not a dictionary. Previously, `value` could be omitted in the other cases and had `None` by default, which is counterintuitive and error-prone.
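
A minimal sketch of filling nulls with a boolean; the DataFrame is illustrative, and `spark` is assumed to be an active SparkSession.

```python
# A minimal sketch of filling nulls with a boolean (Spark 2.3+); the DataFrame
# here is illustrative, and `spark` is assumed to be an active SparkSession.
df = spark.createDataFrame([(True,), (None,)], "flag boolean")
df.na.fill(True).show()  # the null in `flag` is replaced with True
```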

## Upgrading from PySpark 1.4 to 1.5

- Resolution of strings to columns in Python now supports using dots (`.`) to qualify the column or access nested values, for example `df['table.column.nestedField']`. However, this means that if your column name contains any dots you must now escape them using backticks (e.g., ``table.`column.with.dots`.nested``), as shown in the sketch below.
- The `DataFrame.withColumn` method in PySpark supports adding a new column or replacing existing columns of the same name.
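
A minimal sketch of escaping a literal dot in a column name; the column name and data are illustrative, and `spark` is assumed to be an active SparkSession.

```python
# A minimal sketch of escaping a dot inside a column name with backticks;
# the column name and data here are illustrative, and `spark` is assumed to be
# an active SparkSession.
df = spark.createDataFrame([(1, 2)], ["a.b", "c"])
df.select("`a.b`").show()  # selects the column literally named "a.b"
```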

## Upgrading from PySpark 1.0-1.2 to 1.3

- When using DataTypes in Python you will need to construct them (i.e. `StringType()`) instead of referencing a singleton, as shown in the sketch below.
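
A minimal sketch of constructing DataTypes instead of referencing singletons; the schema is illustrative.

```python
# A minimal sketch of constructing DataTypes instead of referencing singletons;
# the schema here is illustrative.
from pyspark.sql.types import StringType, StructField, StructType

schema = StructType([StructField("name", StringType(), True)])
```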