Welcome to

Kirthi's Tech Sphere

My Interests: Math, Computers, Problem Solving, Algorithms, Teaching, Music, Sports

About Me

Experienced in data engineering, data analytics, and software development, with a strong track record of delivering successful projects and driving growth. Have a reputation for being results-driven, well-organized, and detail-oriented. Areas of expertise also include advanced machine learning and data visualization.


Have experience in product management and design, as well as training and mentoring product teams. Extensive experience with Amazon Web Services (AWS) and proficient in various technologies, including Cloudera Hadoop, Apache Spark, Python, Java, and more.


Education
Master's degree in Computer Science from the University of Maryland, and BS and MS in Mathematics and Computer Science from IIT Delhi. Recipient of the President's Gold Medal at IIT in 1982.

Additional
Published the book "Mastering Python Data Visualization" in 2015 with Packt Publishing, London.
Buy from https://www.amazon.com/Mastering-Python-Visualization-Kirthi-Raman/dp/1783988320
Member of Mathematics Stack Exchange with a reputation of 7,484 (top 4%) and 36 badges.
(view https://math.stackexchange.com/users/25538/kirthi-raman)
Cloud Experience
ETL work using AWS Lambda and AWS Glue.
Hive UDFs (user-defined functions). CloudWatch metrics and visualization dashboards.

Created a data catalog by extracting data from Elasticsearch in Python.
Distributed Computing
Apache Spark and DataFrames using Spark SQL and Streaming. Combining Python and Hive in a distributed environment for efficient ETL. Creating frameworks for UI tables and customized visualizations.

Writing template updates, bulk re-indexing, and de-duplication for Elasticsearch in Python.
Cloud Tools
  • Amazon EC2
  • Amazon DynamoDB
  • Amazon SageMaker
  • Amazon RDS
  • Amazon EMR
  • Amazon Textract
  • Amazon MemoryDB for Redis
  • AWS Lambda
  • AWS Elastic Beanstalk
  • Amazon CloudWatch
  • AWS Glue
  • AWS Amplify and Firebase
Work Experience
Leidos, Reston VA
Principal Data Scientist/Engineer, 2019-Current

Developed data tools, algorithms, and UI frameworks to monitor and improve business performance using the Spring/Java framework and Python/FastAPI. Served as technical lead on a large, complex architecture involving 30-50 terabytes of data in Elasticsearch. Working across multiple teams to support various algorithms that boost performance and handle high volumes of data efficiently.


Neustar Inc, Sterling VA
Senior Manager: Data Engineering, 2013-2018

Devised and executed sustainable, data-driven solutions using cloud and data technologies, creating and deploying end-to-end systems. Conducted code reviews, verified quality control processes, and ensured optimal performance. Orchestrated large-scale Cross Device and Match Testing and Integration (CDMTI) functions, generating growth of $5M per year. Automated the ETL process for select customers, resulting in accelerated CDMTI functions, reduced errors, and improved operational efficiency. Technologies include Cloudera Hadoop, Apache Spark, Hive, Python, D3.js, JavaScript, Jenkins, JIRA, Confluence, Git, Java, Scala, R, and Scikit-Learn.


Quotient Inc, Columbia MD
Principal Consultant, 2003-2013

Navigated software engineering product development, overseeing continuous improvement activities and establishing standards and best practices. Designed control system architecture, created user interfaces, and administered key tools. Led a diverse team in synthesizing clinical and FDA data for a MapReduce system, identifying productivity and improvement areas for hospitals. Collaborated with technical specialists to standardize data across its lifecycle through development and governance. Technologies used: Cloudera Hadoop, Hive, Python, D3.js, JavaScript, JIRA, Confluence, Git, Java, and Scikit-Learn. Applied regression, random forest, text mining, and natural language processing techniques.


Longitude Systems, Chantilly VA
Product Manager, 1999-2003

Played a critical role in managing a team of engineers in the development of multiple products for a provisioning system targeting the ISP market. Designed and administered a proof of concept for venture capital purposes, successfully securing a first round of $10M in funding. Contributed to the recruitment of high-performing engineers. This startup was sold to a third-party software company in 2003.


Independent
Consultant, 1993-1999

Involved in the early implementation of a search engine based on a research paper from UC Berkeley.


NITTR, Chandigarh (India)
Lecturer (Assistant Professor), 1984-1989

Taught and trained regional college teachers on a new computer science curriculum, at a time when the field of computer science was just beginning in the country. While teaching at NITTR, received a fellowship that included a study trip to Digital Equipment Corporation in Boston and the MIT labs.


Technology


Interesting technologies in the cloud that stand out are many, but to name a few that I am interested in:


NLP
NLP is the process through which AI is taught to understand the rules and syntax of language, programmed to develop complex algorithms to represent those rules, and then made to use those algorithms to carry out specific tasks. These tasks can include:
  • Language generation: AI apps generate new text based on given prompts or contexts, such as generating text for chatbots, virtual assistants, or even creative writing.
  • Answering questions: AI apps respond to users who've asked a question in natural language on a specific topic.
  • Sentiment analysis: AI apps analyze text to determine the sentiment or emotional tone of the writer, such as whether the text expresses a positive, negative, or neutral sentiment (see the short sketch after this list).
  • Text classification: AI classifies text into different categories or topics, such as categorizing news articles into politics, sports, or entertainment.
  • Machine translation: AI translates text from one language to another, such as from English to Spanish.
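
As a small illustration of the sentiment-analysis task, here is a minimal sketch using NLTK's VADER analyzer; it assumes NLTK is installed (pip install nltk), and the sample sentences are placeholders.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# One-time download of the VADER sentiment lexicon
nltk.download('vader_lexicon')

sia = SentimentIntensityAnalyzer()

# Placeholder sentences for illustration
for sentence in ["I love this movie!", "This was a terrible experience."]:
    scores = sia.polarity_scores(sentence)
    print(sentence, "->", scores['compound'])  # compound score in [-1, 1]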

Speech Recognition
Speech recognition, a groundbreaking technology in the realm of human-computer interaction, allows machines to interpret and convert spoken language into written text. This technology has seen significant advancements over the years, driven by machine learning techniques such as deep learning and neural networks. From voice assistants like Siri and Google Assistant to transcription services and accessibility tools, speech recognition has found applications in various domains. How does speech recognition work? The process involves several stages:
  • Acoustic Signal Processing: The input audio signal is transformed into a format that can be analyzed. This involves breaking down the audio into smaller segments called frames.
  • Feature Extraction: Features like Mel-Frequency Cepstral Coefficients (MFCCs) are extracted from each frame. These features highlight the relevant characteristics of the audio signal for subsequent analysis (a short feature-extraction sketch follows this list).
  • Acoustic Modeling: A trained acoustic model, often based on neural networks, learns to map the extracted features to phonemes or sub-word units. This helps in identifying the phonetic content of the speech.
  • Language Modeling: Language models provide context and help in understanding the sequence of words. These models consider the probability of word combinations and help in selecting the most likely words given the context.
  • Decoding: Using the acoustic and language models, the system decodes the most probable sequence of words that corresponds to the spoken input.
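
As a minimal sketch of the feature-extraction stage, the snippet below computes MFCCs with the librosa library (pip install librosa); the file name speech.wav is a placeholder.

import librosa

# Load the audio as a mono waveform; 16 kHz is a common rate for speech
y, sr = librosa.load("speech.wav", sr=16000)

# Extract 13 MFCCs per frame; the result has shape (13, number_of_frames)
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfccs.shape)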

Example Code in Python using SpeechRecognition Library

Here's a simple example of speech recognition using the SpeechRecognition library in Python. Before running this code, make sure you have the library installed (pip install SpeechRecognition); microphone input also requires the PyAudio package.

import speech_recognition as sr

# Create a recognizer object
recognizer = sr.Recognizer()

# Capture audio from the microphone
with sr.Microphone() as source:
    print("Say something...")
    audio = recognizer.listen(source)

# Perform speech recognition
try:
    text = recognizer.recognize_google(audio)
    print("You said:", text)
except sr.UnknownValueError:
    print("Sorry, I couldn't understand.")
except sr.RequestError as e:
    print("Error fetching results; {0}".format(e))


In this example, the code captures audio from the microphone, processes it using Google's speech recognition service, and then prints the recognized text. However, various other engines and models can be used with the SpeechRecognition library.


Challenges and Future Directions

While speech recognition has made impressive strides, challenges remain, such as handling accents, noisy environments, and complex sentence structures. Ongoing research focuses on improving accuracy and expanding language support.


As technology evolves, speech recognition is expected to play an integral role in enabling more intuitive human-computer interaction, making devices and applications more accessible and user-friendly for everyone.


Machine Learning
Machine Learning: Unleashing Intelligence Through Data
Machine Learning (ML) is a transformative field of artificial intelligence that empowers computers to learn from data and improve their performance over time. Instead of being explicitly programmed to perform tasks, machines use algorithms to learn patterns from data and make informed decisions or predictions. ML has found applications across various domains, from healthcare and finance to image recognition and recommendation systems.

Types of Machine Learning
  1. Supervised Learning: In this approach, the model is trained on a labeled dataset where the input data is paired with the correct output. The model learns to make predictions by generalizing from the training data. Examples include image classification, spam detection, and sentiment analysis.
  2. Unsupervised Learning: Unsupervised learning deals with unlabeled data. The model identifies patterns and structures within the data without explicit guidance. Clustering and dimensionality reduction are common tasks in unsupervised learning (see the clustering sketch after this list).
  3. Reinforcement Learning: In reinforcement learning, an agent interacts with an environment and learns by receiving feedback in the form of rewards or penalties. The agent aims to maximize the cumulative reward over time. This is often used in robotics, game playing, and autonomous systems.
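
To make the unsupervised case concrete, here is a minimal clustering sketch using scikit-learn's KMeans; the data points are toy values chosen so the two groups are obvious.

import numpy as np
from sklearn.cluster import KMeans

# Toy 2-D points forming two well-separated groups
X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])

# Fit two clusters; n_init controls the number of random restarts
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # learned cluster centers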

Machine Learning Examples
  1. Image Classification with Convolutional Neural Networks (CNNs): CNNs are a type of neural network designed for image processing. They have revolutionized tasks like image classification, object detection, and facial recognition. For instance, a CNN can be trained to classify images of animals, distinguishing between cats and dogs.
  2. Natural Language Processing (NLP) with Recurrent Neural Networks (RNNs): RNNs are used for sequence data, making them suitable for language-related tasks. Sentiment analysis, machine translation, and text generation are examples of NLP applications. A sentiment analysis model could classify movie reviews as positive or negative based on their content.
  3. Recommendation Systems with Collaborative Filtering: Recommendation systems suggest items to users based on their preferences and behaviors. Collaborative filtering is a technique where the system recommends items based on the preferences of similar users. For instance, platforms like Netflix use collaborative filtering to suggest movies or shows to users (a toy collaborative-filtering sketch follows this list).
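
To illustrate the collaborative-filtering idea, here is a toy user-based sketch in plain NumPy. The rating matrix is made-up data; real systems use much larger matrices and more robust similarity measures.

import numpy as np

# Toy user-item rating matrix (rows = users, columns = items); 0 means unrated
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between users
norms = np.linalg.norm(R, axis=1, keepdims=True)
sim = (R @ R.T) / (norms * norms.T)

# Predict user 0's rating of item 2 from similar users who rated that item
others = [u for u in range(len(R)) if u != 0 and R[u, 2] > 0]
weights = sim[0, others]
prediction = np.dot(weights, R[others, 2]) / weights.sum()
print(f"Predicted rating of user 0 for item 2: {prediction:.2f}")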


Example Code in Python for Linear Regression

Linear regression is a simple yet powerful technique in supervised learning. It's used to predict a continuous output variable based on one or more input features.
import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt

# Sample data
X = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)
y = np.array([2, 4, 5, 4, 5])

# Create a linear regression model
model = LinearRegression()

# Train the model
model.fit(X, y)

# Make predictions
predictions = model.predict(X)

# Plot the data and the regression line
plt.scatter(X, y, label='Data')
plt.plot(X, predictions, color='red', label='Regression Line')
plt.xlabel('Input')
plt.ylabel('Output')
plt.legend()
plt.show()

In this example, the code uses the scikit-learn library to create a linear regression model, train it on sample data, and visualize the data points along with the regression line.

Future of Machine Learning

The future of machine learning is promising, with advances in deep learning, reinforcement learning, and interpretability. The integration of ML in various industries is expected to drive innovation and solve complex problems by harnessing the power of data-driven intelligence.

Music and Artificial Intelligence
Neural Network Algorithms to Separate Vocals

Vocal removal using deep learning is a technique that aims to separate the vocals (singing or spoken words) from the background music or instrumental parts of an audio track. This process is also known as "vocal isolation" or "karaoke track extraction." Deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), are commonly used to perform this task. Here's a simplified explanation of how vocal removal using deep learning works:


  1. Data Collection: To train a deep learning model for vocal removal, a large dataset of audio tracks is required. This dataset should include songs with vocals and their corresponding instrumental versions or isolated vocal tracks. This dataset is used for supervised learning, where the model learns to differentiate between vocals and instrumental music.
  2. Preprocessing: The audio data is preprocessed to convert it into a suitable format for deep learning. This typically involves converting the audio waveform into a spectrogram, which is a 2D representation of the audio signal's frequency content over time. The spectrogram is divided into smaller segments for processing.
  3. Neural Network Architecture: A deep learning model, often a neural network, is designed to take the spectrogram segments as input and predict whether each segment contains vocals or not. Common architectures used for this task include CNNs and RNNs, sometimes combined into a hybrid model.
  4. Training: The model is trained on the labeled dataset, where the target labels indicate the presence or absence of vocals in each spectrogram segment. The model's parameters are adjusted during training using optimization techniques like gradient descent to minimize the prediction error.
  5. Inference: Once the model is trained, it can be used for vocal removal on new audio tracks. During inference, the audio track is divided into spectrogram segments, and the model predicts whether each segment contains vocals or instrumental music.
  6. Vocal Removal: The predicted vocal and instrumental segments are then separated based on the model's predictions. The vocal segments can be attenuated or removed from the original audio track, leaving behind the instrumental parts. Several techniques can be used to accomplish this, such as spectral subtraction or mask-based processing (a small mask-based sketch follows this list).
  7. Post-processing: To obtain a clean vocal or instrumental track, additional post-processing techniques can be applied to the separated components, such as filtering, smoothing, or filling in gaps. For post-processing, Audacity is the most popular recommended tool (and it is free).
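
As a minimal sketch of the spectrogram and mask-based steps, the snippet below uses librosa. The file name song.mp3 is a placeholder, and the random mask stands in for what a trained model would actually predict.

import numpy as np
import librosa

# Step 2: load the track and compute its complex spectrogram
y, sr = librosa.load("song.mp3", sr=None, mono=True)
D = librosa.stft(y, n_fft=2048, hop_length=512)
magnitude, phase = np.abs(D), np.angle(D)

# Placeholder vocal mask in [0, 1]; a trained model would predict this per bin
mask = np.random.rand(*magnitude.shape)

# Step 6: mask-based separation into vocal and instrumental components
vocals = librosa.istft(mask * magnitude * np.exp(1j * phase), hop_length=512)
instrumental = librosa.istft((1 - mask) * magnitude * np.exp(1j * phase), hop_length=512)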

It is important to note that vocal removal using deep learning is a challenging task, and the quality of the results may vary depending on factors like the model's architecture, training data quality, and the complexity of the audio tracks. While deep learning can be effective at isolating vocals, it may not achieve perfect results in all cases, and some artifacts or residual vocals may remain in the separated tracks.


Sample Vocals Extracted

How good are the results? I tried a handful of Hindi songs, and almost all of them performed incredibly well compared to other solutions. To avoid copyright infringement issues, I will provide only samples of a little over a minute from each song. Below are the original, vocal, and instrumental versions of three samples I ran through the two-stem (instrumental/vocal) filter.

Song - 1
Humne To Dil Ko Aapke (Rafi, Asha) – Original Song

Humne To Dil Ko Aapke (Rafi, Asha) – Vocals

Humne To Dil Ko Aapke (Rafi, Asha) – Instrumental
Song - 2
Tum_Jo_Huye_Mere_Humsafar (Rafi, Geeta Dutt) – Original Song

Tum_Jo_Huye_Mere_Humsafar (Rafi, Geeta Dutt) – Vocals

Tum_Jo_Huye_Mere_Humsafar (Rafi, Geeta Dutt) – Instrumental
Song - 3
Hum Aur Tum Aur Ye Samaah (Rafi) – Original Song

Hum Aur Tum Aur Ye Samaah (Rafi) – Vocals

Hum Aur Tum Aur Ye Samaah (Rafi) – Instrumental

What I love about this capability is that, by listening to the vocals separately, I get to learn to sing better.



Commercial Tools for Vocal Removal

  • PhonicMind (https://phonicmind.com)
  • Lalal (https://lalal.ai)
There are several more commercial tools, but the most accurate and low-cost one is Lalal.ai.


Open Source Tool

Spleeter is an open-source audio source separation library with pretrained models, written in Python using TensorFlow.


To use either of these tools or the open-source Spleeter, the first step is to obtain an MP3 version of the song track, which you can acquire from sources like YouTube. While there are websites that can perform the download for you, many of these sites may be susceptible to viruses. If you have programming skills, particularly in Python, you can use the following code snippet to accomplish this task.

from pytube import YouTube
import argparse
import os

# Parse the command-line arguments
parser = argparse.ArgumentParser()
parser.add_argument("--url", help="youtube url")
parser.add_argument("--format", help="format is mp3 or mp4")

args = parser.parse_args()
print(f"Arguments url: {args.url} format: {args.format}")

# Assume the output goes in the folder downloadarea
folderpath = "./downloadarea"
yt = YouTube(args.url)
if args.format == "mp4":
    video = yt.streams.filter(only_audio=False).first()
    folderpath += "/mp4s"
else:
    # audio-only stream for mp3
    video = yt.streams.filter(only_audio=True).first()

out_file = video.download(output_path=folderpath)
base, ext = os.path.splitext(out_file)
new_file = base + '.' + args.format
new_file = new_file.replace(' ', '_')  # replace blanks in the filename
os.rename(out_file, new_file)  # note: renames the extension, does not transcode

# report success
print(yt.title + " has been successfully downloaded.")

To get this code to work, you need the pytube Python library, specifically version 15.0.0 or later.


pip install pytube==15.0.0

Follow these steps to install Spleeter.


To install spleeter using pip


pip install spleeter

Collecting spleeter
  Downloading spleeter-2.4.0-py3-none-any.whl (49 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 49.4/49.4 KB 1.3 MB/s eta 0:00:00
Collecting pandas<2.0.0,>=1.3.0
  Using cached pandas-1.5.3-cp39-cp39-macosx_10_9_x86_64.whl (12.0 MB)
Collecting tensorflow<2.10.0,>=2.5.0
  Downloading tensorflow-2.9.3-cp39-cp39-macosx_10_14_x86_64.whl (228.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 228.6/228.6 MB 4.9 MB/s eta 0:00:00
Collecting norbert<0.3.0,>=0.2.1
  Using cached norbert-0.2.1-py2.py3-none-any.whl (11 kB)
Collecting httpx[http2]<0.20.0,>=0.19.0
  Downloading httpx-0.19.0-py3-none-any.whl (77 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 77.3/77.3 KB 2.3 MB/s eta 0:00:00
Collecting ffmpeg-python<0.3.0,>=0.2.0
  Using cached ffmpeg_python-0.2.0-py3-none-any.whl (25 kB)
Collecting typer<0.4.0,>=0.3.2
  Downloading typer-0.3.2-py3-none-any.whl (21 kB)
Collecting future
  Downloading future-0.18.3.tar.gz (840 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 840.9/840.9 KB 8.2 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting httpcore<0.14.0,>=0.13.3
  Downloading httpcore-0.13.7-py3-none-any.whl (58 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.8/58.8 KB 1.9 MB/s eta 0:00:00
Collecting sniffio
  Downloading sniffio-1.3.0-py3-none-any.whl (10 kB)
Collecting certifi
  Downloading certifi-2023.7.22-py3-none-any.whl (158 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 158.3/158.3 KB 3.3 MB/s eta 0:00:00
Collecting rfc3986[idna2008]<2,>=1.3
  Downloading rfc3986-1.5.0-py2.py3-none-any.whl (31 kB)
Collecting charset-normalizer
  Downloading charset_normalizer-3.3.2-cp39-cp39-macosx_10_9_x86_64.whl (122 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 122.5/122.5 KB 2.3 MB/s eta 0:00:00
Collecting h2<5,>=3
  Using cached h2-4.1.0-py3-none-any.whl (57 kB)
Collecting scipy
  Downloading scipy-1.11.3-cp39-cp39-macosx_10_9_x86_64.whl (37.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 37.3/37.3 MB 6.5 MB/s eta 0:00:00
Collecting pytz>=2020.1
  Downloading pytz-2023.3.post1-py2.py3-none-any.whl (502 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 502.5/502.5 KB 5.6 MB/s eta 0:00:00
Collecting numpy>=1.20.3
  Downloading numpy-1.26.1-cp39-cp39-macosx_10_9_x86_64.whl (20.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.6/20.6 MB 6.3 MB/s eta 0:00:00
Collecting python-dateutil>=2.8.1
  Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting google-pasta>=0.1.1
  Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Collecting wrapt>=1.11.0
  Downloading wrapt-1.15.0-cp39-cp39-macosx_10_9_x86_64.whl (35 kB)
Collecting keras-preprocessing>=1.1.1
  Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Collecting termcolor>=1.1.0
  Using cached termcolor-2.3.0-py3-none-any.whl (6.9 kB)
Collecting absl-py>=1.0.0
  Downloading absl_py-2.0.0-py3-none-any.whl (130 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 130.2/130.2 KB 3.4 MB/s eta 0:00:00
Collecting flatbuffers<2,>=1.12
  Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
Collecting tensorflow-estimator<2.10.0,>=2.9.0rc0
  Using cached tensorflow_estimator-2.9.0-py2.py3-none-any.whl (438 kB)
Collecting h5py>=2.9.0
  Downloading h5py-3.10.0-cp39-cp39-macosx_10_9_x86_64.whl (3.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 8.3 MB/s eta 0:00:00
Collecting six>=1.12.0
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting grpcio<2.0,>=1.24.3
  Downloading grpcio-1.59.2-cp39-cp39-macosx_10_10_universal2.whl (9.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.6/9.6 MB 7.9 MB/s eta 0:00:00
Collecting keras<2.10.0,>=2.9.0rc0
  Using cached keras-2.9.0-py2.py3-none-any.whl (1.6 MB)
Collecting gast<=0.4.0,>=0.2.1
  Using cached gast-0.4.0-py3-none-any.whl (9.8 kB)
Requirement already satisfied: setuptools in 
  /Users/kirthiraman/kirthi/research/songs/python3env/lib/python3.9/site-packages 
  (from tensorflow<2.10.0,>=2.5.0->spleeter) (58.1.0)
Collecting opt-einsum>=2.3.2
  Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting packaging
  Downloading packaging-23.2-py3-none-any.whl (53 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.0/53.0 KB 1.7 MB/s eta 0:00:00
Collecting typing-extensions>=3.6.6
  Downloading typing_extensions-4.8.0-py3-none-any.whl (31 kB)
Collecting libclang>=13.0.0
  Downloading libclang-16.0.6-py2.py3-none-macosx_10_9_x86_64.whl (24.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.5/24.5 MB 6.9 MB/s eta 0:00:00
Collecting astunparse>=1.6.0
  Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting tensorflow-io-gcs-filesystem>=0.23.1
  Downloading tensorflow_io_gcs_filesystem-0.34.0-cp39-cp39-macosx_10_14_x86_64.whl (1.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 5.6 MB/s eta 0:00:00
Collecting protobuf<3.20,>=3.9.2
  Downloading protobuf-3.19.6-cp39-cp39-macosx_10_9_x86_64.whl (980 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 980.5/980.5 KB 7.5 MB/s eta 0:00:00
Collecting tensorboard<2.10,>=2.9
  Downloading tensorboard-2.9.1-py3-none-any.whl (5.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.8/5.8 MB 6.3 MB/s eta 0:00:00
Collecting click<7.2.0,>=7.1.1
  Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 82.8/82.8 KB 2.2 MB/s eta 0:00:00
Collecting wheel<1.0,>=0.23.0
  Using cached wheel-0.41.3-py3-none-any.whl (65 kB)
Collecting hpack<5,>=4.0
  Using cached hpack-4.0.0-py3-none-any.whl (32 kB)
Collecting hyperframe<7,>=6.0
  Using cached hyperframe-6.0.1-py3-none-any.whl (12 kB)
Collecting anyio==3.*
  Downloading anyio-3.7.1-py3-none-any.whl (80 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 80.9/80.9 KB 2.6 MB/s eta 0:00:00
Collecting h11<0.13,>=0.11
  Downloading h11-0.12.0-py3-none-any.whl (54 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.9/54.9 KB 1.7 MB/s eta 0:00:00
Collecting exceptiongroup
  Downloading exceptiongroup-1.1.3-py3-none-any.whl (14 kB)
Collecting idna>=2.8
  Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting tensorboard-plugin-wit>=1.6.0
  Using cached tensorboard_plugin_wit-1.8.1-py3-none-any.whl (781 kB)
Collecting requests<3,>=2.21.0
  Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting werkzeug>=1.0.1
  Downloading werkzeug-3.0.1-py3-none-any.whl (226 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 226.7/226.7 KB 3.8 MB/s eta 0:00:00
Collecting google-auth<3,>=1.6.3
  Downloading google_auth-2.23.4-py2.py3-none-any.whl (183 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 183.3/183.3 KB 4.2 MB/s eta 0:00:00
Collecting markdown>=2.6.8
  Downloading Markdown-3.5.1-py3-none-any.whl (102 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 102.2/102.2 KB 3.5 MB/s eta 0:00:00
Collecting tensorboard-data-server<0.7.0,>=0.6.0
  Using cached tensorboard_data_server-0.6.1-py3-none-macosx_10_9_x86_64.whl (3.5 MB)
Collecting google-auth-oauthlib<0.5,>=0.4.1
  Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB)
Collecting cachetools<6.0,>=2.0.0
  Downloading cachetools-5.3.2-py3-none-any.whl (9.3 kB)
Collecting pyasn1-modules>=0.2.1
  Using cached pyasn1_modules-0.3.0-py2.py3-none-any.whl (181 kB)
Collecting rsa<5,>=3.1.4
  Using cached rsa-4.9-py3-none-any.whl (34 kB)
Collecting requests-oauthlib>=0.7.0
  Using cached requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB)
Collecting importlib-metadata>=4.4
  Downloading importlib_metadata-6.8.0-py3-none-any.whl (22 kB)
Collecting urllib3<3,>=1.21.1
  Downloading urllib3-2.0.7-py3-none-any.whl (124 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 124.2/124.2 KB 3.6 MB/s eta 0:00:00
Collecting MarkupSafe>=2.1.1
  Downloading MarkupSafe-2.1.3-cp39-cp39-macosx_10_9_x86_64.whl (13 kB)
Collecting zipp>=0.5
  Downloading zipp-3.17.0-py3-none-any.whl (7.4 kB)
Collecting pyasn1<0.6.0,>=0.4.6
  Using cached pyasn1-0.5.0-py2.py3-none-any.whl (83 kB)
Collecting oauthlib>=3.0.0
  Using cached oauthlib-3.2.2-py3-none-any.whl (151 kB)
Using legacy 'setup.py install' for future, since package 'wheel' is not installed.
Installing collected packages: tensorboard-plugin-wit, rfc3986, pytz, libclang, keras, 
flatbuffers, zipp, wrapt, wheel, urllib3, typing-extensions, termcolor, 
tensorflow-io-gcs-filesystem, tensorflow-estimator, tensorboard-data-server, 
sniffio, six, pyasn1, protobuf, packaging, oauthlib, numpy, MarkupSafe, idna, 
hyperframe, hpack, h11, grpcio, gast, future, exceptiongroup, click, charset-normalizer, 
certifi, cachetools, absl-py, werkzeug, typer, scipy, rsa, requests, python-dateutil, 
pyasn1-modules, opt-einsum, keras-preprocessing, importlib-metadata, h5py, h2, 
google-pasta, ffmpeg-python, astunparse, anyio, requests-oauthlib, pandas, norbert, 
markdown, httpcore, google-auth, httpx, google-auth-oauthlib, tensorboard, tensorflow, spleeter
  Running setup.py install for future ... done
Successfully installed MarkupSafe-2.1.3 absl-py-2.0.0 anyio-3.7.1 astunparse-1.6.3 cachetools-5.3.2 
certifi-2023.7.22 charset-normalizer-3.3.2 click-7.1.2 exceptiongroup-1.1.3 ffmpeg-python-0.2.0 
flatbuffers-1.12 future-0.18.3 gast-0.4.0 google-auth-2.23.4 google-auth-oauthlib-0.4.6 
google-pasta-0.2.0 grpcio-1.59.2 h11-0.12.0 h2-4.1.0 h5py-3.10.0 hpack-4.0.0 httpcore-0.13.7 
httpx-0.19.0 hyperframe-6.0.1 idna-3.4 importlib-metadata-6.8.0 keras-2.9.0 
keras-preprocessing-1.1.2 libclang-16.0.6 markdown-3.5.1 norbert-0.2.1 numpy-1.26.1 
oauthlib-3.2.2 opt-einsum-3.3.0 packaging-23.2 pandas-1.5.3 protobuf-3.19.6 pyasn1-0.5.0 
pyasn1-modules-0.3.0 python-dateutil-2.8.2 pytz-2023.3.post1 requests-2.31.0 requests-oauthlib-1.3.1 
rfc3986-1.5.0 rsa-4.9 scipy-1.11.3 six-1.16.0 sniffio-1.3.0 spleeter-2.4.0 tensorboard-2.9.1 
tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorflow-2.9.3 tensorflow-estimator-2.9.0 
tensorflow-io-gcs-filesystem-0.34.0 termcolor-2.3.0 typer-0.3.2 typing-extensions-4.8.0 urllib3-2.0.7 
werkzeug-3.0.1 wheel-0.41.3 wrapt-1.15.0 zipp-3.17.0

To see what options Spleeter has, or how to separate vocals, try


spleeter --help
The response you should expect is
Usage: spleeter [OPTIONS] COMMAND [ARGS]...

Options:
  --version  Return Spleeter version
  --help     Show this message and exit.

Commands:
  evaluate  Evaluate a model on the musDB test dataset
  separate  Separate audio file(s)
  train     Train a source separation model


Now let us say you have a folder mymp3songs containing downloaded MP3 songs that need vocal separation. You might also want an output folder where Spleeter puts the vocal and instrumental files for your songs.


Example: I have a song called Humne_To_Dil_Ko_SHORTSONG.mp3 in my folder and want the vocals separated.
spleeter separate mymp3songs/Humne_To_Dil_Ko_SHORTSONG.mp3 -o output

will separate the vocals and the instrumental into separate audio files and save them in the output folder.
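
Spleeter can also be driven from Python. Here is a minimal sketch using its Separator class; the file paths mirror the CLI example above.

from spleeter.separator import Separator

# Load the pretrained two-stem (vocals/accompaniment) model
separator = Separator('spleeter:2stems')

# Write the vocal and accompaniment tracks for the song into the output folder
separator.separate_to_file('mymp3songs/Humne_To_Dil_Ko_SHORTSONG.mp3', 'output')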


English Songs (My Playlist)


(Playlist table: Name, Album, Singer, Time, Link)

Hindi Songs (Not My Playlist)


(Playlist table: Name, Singer(s), Type, Music Director, Link)

You Are In

Sports Quest Zone

Questions About Basketball and American Football




NBA Quest Zone will be available soon.