
Lesson Learned #510: Using cProfile to Analyze Python Call Performance in Support Scenarios

by info.odysseyx@gmail.com


While I was working on a support case last week, a customer ran into a performance issue with a Python application.

After some research, I decided to suggest cProfile to identify the most time-consuming function calls and use them as a starting point for troubleshooting.

Profiling the Python code was essential to pinpoint the bottleneck, so I suggested cProfile, a built-in Python module that helps you profile your code and identify performance issues.
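To see the basic pattern in isolation, here is a minimal, self-contained sketch; slow_sum is just a stand-in workload, not from the customer's code:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately unoptimized loop so the profiler has something to measure.
    total = 0
    for i in range(n):
        total += i
    return total

# Profile the workload inside the context manager (Python 3.8+).
with cProfile.Profile() as profile:
    slow_sum(200_000)

# Write the report to a string so it can be inspected programmatically.
stream = io.StringIO()
stats = pstats.Stats(profile, stream=stream)
stats.strip_dirs().sort_stats('cumulative').print_stats(5)
report = stream.getvalue()
print(report)
```

The report lists each function with its call count and timing columns, sorted by cumulative time.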

But first, I ran a few examples with my own test code to see what the results look like.

In this code snippet:

  • RunCommandTimeout is a function that runs a command with a timeout against a SQL database.
  • cProfile.Profile() starts profiling the code inside its context.
  • After the run, pstats.Stats(profile) helps you visualize the most time-consuming calls by sorting them by cumulative time.
import cProfile
import pstats

with cProfile.Profile() as profile:
    # Profile the command under test.
    RunCommandTimeout(initial_timeout=1, loop_count=5, retry=True, retry_count=3, retry_increment=4)

# Summarize the results, sorted by cumulative time, keeping the top 10 entries.
results = pstats.Stats(profile)
results.strip_dirs().sort_stats('cumulative').print_stats(10)

I was able to track the exact function calls and the time spent in each. For example:

  • the number of calls to each function
  • the time spent in each function (including subcalls)
  • the cumulative time per function, which is useful for discovering bottlenecks
   Ordered by: cumulative time
   List reduced from 181 to 10 due to restriction <10>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.357    0.357   84.706   84.706 GiveNotesPerformance.py:240(RunCommandTimeout)
        7   54.108    7.730   54.108    7.730 {method 'execute' of 'pyodbc.Cursor' objects}
        3   27.001    9.000   27.001    9.000 {built-in method time.sleep}
        4    0.001    0.000    3.025    0.756 GiveNotesPerformance.py:63(ConnectToTheDB)
        4    2.945    0.736    2.951    0.738 {built-in method pyodbc.connect}
        1    0.209    0.209    0.209    0.209 {method 'close' of 'pyodbc.Connection' objects}
       29    0.035    0.001    0.035    0.001 {method 'flush' of '_io.TextIOWrapper' objects}
       12    0.000    0.000    0.032    0.003 GiveNotesPerformance.py:526(set_text_color)
        4    0.001    0.000    0.030    0.007 GiveNotesPerformance.py:39(get_credentials_from_file)
        4    0.027    0.007    0.027    0.007 {built-in method io.open}

By sorting the profiling results for this Python run by cumulative time, I was able to find the following:

  • Very interestingly, the execute method was called 7 times for a total of 54 seconds, which means it is responsible for most of the time spent in this run. Each call took an average of 7.73 seconds, indicating that the database queries are the main bottleneck.
  • In addition, time.sleep was called to introduce a delay as part of the retry mechanism built into the RunCommandTimeout function. Each sleep call lasted an average of 9 seconds.
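The source of RunCommandTimeout isn't shown here, but a retry loop of roughly this shape would produce that kind of sleep signature in a profile; run_with_retry, its parameters, and the backoff scheme are hypothetical illustrations, not the customer's actual code:

```python
import time

def run_with_retry(command, initial_timeout=1, retry_count=3, retry_increment=4):
    """Hypothetical retry loop showing how time.sleep delays accumulate.

    Each failed attempt sleeps for a growing interval before retrying,
    which is why sleep time can dominate a profile when queries time out.
    """
    delay = initial_timeout
    for attempt in range(retry_count):
        try:
            return command()
        except TimeoutError:
            time.sleep(delay)           # delay before the next attempt
            delay += retry_increment    # incremental backoff
    raise TimeoutError(f"gave up after {retry_count} attempts")
```

With initial_timeout=1 and retry_increment=4, two failed attempts would sleep 1 s and then 5 s before the third try, and those intervals show up directly under time.sleep in the profiler output.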

Enjoy!
