Optimistic concurrency

Optimistic concurrency is a strategy used in databases and distributed systems to handle concurrent access to shared resources, like a dataset, without requiring locks. Instead of locking resources, optimistic concurrency relies on detecting conflicting changes made by multiple processes or users and resolving them when necessary.

In Spark and Databricks, optimistic concurrency is applied when writing to Delta Lake tables. Delta Lake is a storage layer built on top of Apache Spark that provides ACID transactions and other data management capabilities.

Here's a simple example to illustrate optimistic concurrency in Spark Databricks using Delta Lake:

Let's say you have a Delta Lake table called "inventory" with the following schema and data:

+---------+---------+-------+
| item_id | item_nm | stock |
+---------+---------+-------+
| 1       | Apple   | 10    |
| 2       | Orange  | 20    |
| 3       | Banana  | 30    |
+---------+---------+-------+
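
For reference, here is a minimal PySpark sketch that creates such a table. This is a sketch, assuming the delta-spark package is installed; on Databricks the Delta configuration shown below is already in place and the existing spark session can be used directly.

from pyspark.sql import SparkSession

# Minimal session setup for open-source Spark with the delta-spark package installed.
# (On Databricks these configs are already set; just use the provided `spark` session.)
spark = (
    SparkSession.builder
    .appName("inventory-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Create the sample inventory data and save it as a Delta table named "inventory".
rows = [(1, "Apple", 10), (2, "Orange", 20), (3, "Banana", 30)]
df = spark.createDataFrame(rows, ["item_id", "item_nm", "stock"])
df.write.format("delta").mode("overwrite").saveAsTable("inventory")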

Imagine two users, User A and User B, trying to update the Apple stock at the same time.

User A's update:

UPDATE inventory SET stock = stock + 5 WHERE item_id = 1;

User B's update:

UPDATE inventory SET stock = stock - 3 WHERE item_id = 1;

With optimistic concurrency, both User A and User B can execute their updates without waiting for the other to complete. When each transaction commits, the system checks whether another writer has made a conflicting change in the meantime.

Logically, the two updates do not depend on each other, so the final stock of Apples should be 12 (10 + 5 - 3). If the storage layer does detect a conflict, for example because both transactions rewrote the same underlying data files, it throws a concurrent-modification exception and the losing transaction has to be retried; after the retry the result is still 12.
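
To make the commit-check-and-retry behaviour concrete, here is a hedged PySpark sketch of how each user's update could be wrapped in a retry loop. It assumes the inventory table from above and the delta-spark Python package (whose delta.exceptions module exposes the Delta conflict exceptions); the helper name update_stock and the retry policy are illustrative, not part of any library, and the exact conflict subtype raised can vary by version.

import time
from delta.exceptions import DeltaConcurrentModificationException  # assumed base class for Delta commit conflicts

def update_stock(spark, item_id, amount, max_retries=3):
    # Apply a stock adjustment; on a write conflict, wait briefly and retry.
    for attempt in range(max_retries):
        try:
            spark.sql(
                f"UPDATE inventory SET stock = stock + {amount} WHERE item_id = {item_id}"
            )
            return  # commit succeeded
        except DeltaConcurrentModificationException:
            time.sleep(1)  # another writer won the race; try again against the new table version
    raise RuntimeError(f"Update of item {item_id} failed after {max_retries} retries")

# User A and User B could run these from separate jobs or threads:
#   update_stock(spark, item_id=1, amount=5)    # User A: +5
#   update_stock(spark, item_id=1, amount=-3)   # User B: -3
# Whichever commit loses the race is retried, so the final Apple stock is still 12.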

Optimistic concurrency is beneficial in scenarios where conflicts are rare and lock-based approaches might lead to performance degradation. Allowing concurrent updates without locking can improve throughput and responsiveness in many multi-user and distributed applications.
