Applicants

Description

Gets data for the home dashboard page and returns a summary of applicant users by status. The current implementation runs multiple queries; it can be simplified into a single query.

Optimizing Status

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/applicantuser/summary
  • Method: GET
  • Permission: get-summary-applicant-user
  • Possible Role Type: SUPER_ADMIN, USER_ADMIN, INTERNAL

Response

{
    "data": {
        "total": <int>,          // total applicants in the database
        "total_enable": <int>,   // total enabled applicants
        "total_disable": <int>   // total disabled applicants
    },
    "message": "Retrieved successfully"
}

1. Get all applicants

a. The query

select count(*) as aggregate from `users` where `user_type` = 'APP';

b. Explain Query

-> Aggregate: count(0)  (cost=635 rows=1) (actual time=1.65..1.65 rows=1 loops=1)
    -> Filter: (users.user_type = 'APP') (cost=592 rows=432) (actual time=0.0531..1.57 rows=1798 loops=1)
        -> Covering index scan on users using unique_email_user_type (cost=592 rows=4316) (actual time=0.0504..1.13 rows=4006 loops=1)

c. Query log time consumption

3.7 ms

2. Get enabled applicants

a. The query

select count(*) as aggregate from `users` where `user_type` = 'APP' and `is_deleted` = 0 and `status` = 1;

b. Explain Query

-> Aggregate: count(0)  (cost=592 rows=1) (actual time=4.32..4.32 rows=1 loops=1)
    -> Filter: ((users.`status` = 1) and (users.is_deleted = 0) and (users.user_type = 'APP')) (cost=592 rows=4.32) (actual time=0.0634..4.23 rows=1765 loops=1)
        -> Table scan on users (cost=592 rows=4316) (actual time=0.0474..3.7 rows=4006 loops=1)

c. Query log time consumption

9.77 ms

3. Get disabled applicants

a. The query

select count(*) as aggregate from `users` where `user_type` = 'APP' and `is_deleted` = 0 and `status` = 0;

b. Explain Query

-> Aggregate: count(0)  (cost=592 rows=1) (actual time=6.5..6.5 rows=1 loops=1)
    -> Filter: ((users.`status` = 0) and (users.is_deleted = 0) and (users.user_type = 'APP')) (cost=592 rows=4.32) (actual time=0.0879..6.49 rows=22 loops=1)
        -> Table scan on users (cost=592 rows=4316) (actual time=0.067..6.09 rows=4006 loops=1)

c. Query log time consumption

2.78 ms

Optimizations

1. Add index

Check production to make sure user_type and status are already indexed; the table should carry the same composite index as HQ Manager below.

ALTER TABLE `users` ADD INDEX (`user_type`, `is_deleted`, `status`);
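After adding the index, it is worth confirming that the optimizer actually picks it up before relying on it. A quick check, assuming a MySQL 8 shell against the same schema:

```sql
-- Should show a covering index lookup instead of a full table scan
EXPLAIN ANALYZE
SELECT COUNT(id) AS aggregate, `status`
FROM `users`
WHERE `user_type` = 'APP' AND `is_deleted` = 0
GROUP BY `status`;
```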

2. Optimize query

select count(id) as aggregate, status
from `users`
where
`user_type` = 'APP'
and `is_deleted` = 0
group by status;

3. Explain query

-> Group aggregate: count(users.id)  (cost=441 rows=61.8) (actual time=0.0271..0.878 rows=2 loops=1)
    -> Covering index lookup on users using user_type (user_type='APP', is_deleted=0) (cost=263 rows=1787) (actual time=0.014..0.801 rows=1787 loops=1)

4. Query log time consumption

9.24 ms

5. Conclusion

  • Aggregate queries with GROUP BY are usually better in terms of efficiency, resource consumption, and scalability, especially for large tables.
  • If many operations filter on the user_type and is_deleted columns, this index will also help speed up other queries that use similar filters.
  • The status field indicates whether a user is active or inactive, so to fetch all APP users we only need to filter on is_deleted = 0.

Implementation into code.

Update App\Http\Controllers\ApplicantUserController::applicantUserSummaryByStatus

Without cache

public function applicantUserSummaryByStatus(SummaryApplicantUserByStatusRequest $request)
{
    $result = $this->userRepository
        ->select(DB::raw('count(*) as aggregate'), 'status')
        ->where('user_type', 'APP')
        ->where('is_deleted', 0)
        ->groupBy('status')
        ->get();

    $total = 0;
    $totalEnable = 0;
    $totalDisable = 0;
    foreach ($result as $row) {
        $total += $row->aggregate;
        if ($row->status == 1) {
            $totalEnable += $row->aggregate;
        } else {
            $totalDisable += $row->aggregate;
        }
    }

    return response()->json([
        'data' => [
            'total' => $total,
            'total_enable' => $totalEnable,
            'total_disable' => $totalDisable,
        ],
        'message' => __('messages.portal.applicantusers.success.applicantUserSummaryByStatus')
    ], 200);
}

Summary of optimizations

| Query | Before (Local) | After | Percentage | Before (Prod) | After | Percentage |
| --- | --- | --- | --- | --- | --- | --- |
| Without cache | 16.25 ms | 9.24 ms | 43.14 % | - | - | - |

Available credit total by HQ

Description

Available credits total by HQ

Optimizing Status

Already optimized

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/credit/hq/availableCreditsTotalByHq
  • Method: GET
  • Permission: view-available-credits-total-by-hq
  • Possible Role Type: HQ, AREA, LOCATION

Response

{
    "data": {
        "available_credit_total": <double>,
        "url_logo": "<string>",
        "location_badge_assignments": []
    },
    "message": "Retrieved successfully"
}

1. Get sum available credit

a. The query

select ROUND(SUM(available_credits), 2) as available_credit_total
from `locations`
where
`company_id` = 1
and `deleted_at` is null
limit 1

b. Explain Query

-> Limit: 1 row(s)  (cost=79.4 rows=1) (actual time=0.864..0.864 rows=1 loops=1)
    -> Aggregate: sum(locations.available_credits) (cost=79.4 rows=1) (actual time=0.863..0.863 rows=1 loops=1)
        -> Filter: (locations.deleted_at is null) (cost=76.1 rows=33.1) (actual time=0.0778..0.838 rows=331 loops=1)
            -> Index lookup on locations using locations_company_id_foreign (company_id=1) (cost=76.1 rows=331) (actual time=0.0769..0.815 rows=331 loops=1)

c. Query log time consumption

2.04 ms

d. Conclusion

No optimization needed; this query is already optimized with an index lookup on locations_company_id_foreign.

Company

Description

Get total number of partner companies.
This query can be simplified with a single query instead of multiple queries.

Optimizing Status

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/company/summary
  • Method: GET
  • Permission: summary-company
  • Possible Role Type: SUPER_ADMIN, USER_ADMIN, INTERNAL, SUPER_HQ_EXTERNAL(depending on company permission)

Response

{
    "data": {
        "total": <int>,          // total companies in the database
        "total_enable": <int>,   // total companies with status = 1
        "total_disable": <int>   // total companies with status = 0
    },
    "message": "Retrieved successfully"
}

1. Get all companies

a. The query

select count(*) as aggregate from `companies` where `companies`.`deleted_at` is null;

b. Explain Query

-> Aggregate: count(0)  (cost=157 rows=1) (actual time=1.53..1.53 rows=1 loops=1)
    -> Filter: (companies.deleted_at is null) (cost=144 rows=131) (actual time=0.762..1.48 rows=1306 loops=1)
        -> Table scan on companies (cost=144 rows=1306) (actual time=0.761..1.36 rows=1306 loops=1)

c. Query log time consumption

7.74 ms

2. Get active companies

a. The query

select count(*) as aggregate from `companies` where `status` = 1 and `companies`.`deleted_at` is null;

b. Explain Query

-> Aggregate: count(0)  (cost=145 rows=1) (actual time=4.46..4.46 rows=1 loops=1)
    -> Filter: ((companies.`status` = 1) and (companies.deleted_at is null)) (cost=144 rows=13.1) (actual time=3.98..4.42 rows=1292 loops=1)
        -> Table scan on companies (cost=144 rows=1306) (actual time=3.97..4.33 rows=1306 loops=1)

c. Query log time consumption

0.7 ms

3. Get inactive companies

a. The query

select count(*) as aggregate from `companies` where `status` = 0 and `companies`.`deleted_at` is null

b. Explain Query

-> Aggregate: count(0)  (cost=135 rows=1) (actual time=0.532..0.532 rows=1 loops=1)
    -> Filter: ((companies.`status` = 0) and (companies.deleted_at is null)) (cost=134 rows=13.1) (actual time=0.041..0.529 rows=14 loops=1)
        -> Table scan on companies (cost=134 rows=1306) (actual time=0.0275..0.445 rows=1306 loops=1)

c. Query log time consumption

0.66 ms

Optimizations

1. Add index

ALTER TABLE `jod`.`companies` ADD INDEX (`status`, `deleted_at`);

2. Optimize query

SELECT
    COUNT(*) AS aggregate,
    SUM(status = 1) AS aggregate_enable,
    SUM(status = 0) AS aggregate_disable
FROM `companies`
WHERE `companies`.`deleted_at` IS NULL;

3. Explain query

-> Aggregate: count(0), sum((companies.`status` = 1)), sum((companies.`status` = 0))  (cost=147 rows=1) (actual time=0.445..0.445 rows=1 loops=1)
    -> Filter: (companies.deleted_at is null) (cost=134 rows=131) (actual time=0.0197..0.301 rows=1306 loops=1)
        -> Covering index scan on companies using companies_summary_idx (cost=134 rows=1306) (actual time=0.0189..0.242 rows=1306 loops=1)

4. Query log time consumption

6.85 ms

5. Conclusion

  1. High efficiency: the table is scanned only once, which significantly reduces overhead compared to running multiple queries or a GROUP BY query.
  2. All required results are available in one row, and there is no need for further grouping or manipulation of the results.
  3. Using cache is highly recommended, since the companies table rarely changes.
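One edge case worth noting: in MySQL, SUM() over an empty result set returns NULL rather than 0, so if the filtered table could ever be empty, the one-pass query can be hardened with COALESCE. A sketch of the same query above:

```sql
SELECT
    COUNT(*) AS aggregate,
    COALESCE(SUM(status = 1), 0) AS aggregate_enable,
    COALESCE(SUM(status = 0), 0) AS aggregate_disable
FROM `companies`
WHERE `companies`.`deleted_at` IS NULL;
```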

Implementation into code.

Update App\Repositories\CompanyRepositoryEloquent::companySummaryByStatus

Without cache

public function companySummaryByStatus()
{
    $result = $this->select(
        DB::raw('count(*) as aggregate'),
        DB::raw('SUM(status = 1) AS aggregate_enable'),
        DB::raw('SUM(status = 0) AS aggregate_disable')
    )->get();

    $total = 0;
    $totalEnableCompany = 0;
    $totalDisableCompany = 0;
    foreach ($result as $row) {
        $total += $row->aggregate;
        $totalEnableCompany += $row->aggregate_enable;
        $totalDisableCompany += $row->aggregate_disable;
    }

    return [
        'total' => $total,
        'total_enable' => $totalEnableCompany,
        'total_disable' => $totalDisableCompany,
    ];
}

With cache

public function companySummaryByStatus()
{
    // Take from cache, or compute and store for 24 hours.
    // $this is bound inside the closure automatically (PHP 5.4+),
    // so no `use` clause is needed.
    return Cache::remember("jod_companies", 86400, function () {
        $result = $this->select(
            DB::raw('count(*) as aggregate'),
            DB::raw('SUM(status = 1) AS aggregate_enable'),
            DB::raw('SUM(status = 0) AS aggregate_disable')
        )->get();

        $total = 0;
        $totalEnableCompany = 0;
        $totalDisableCompany = 0;
        foreach ($result as $row) {
            $total += $row->aggregate;
            $totalEnableCompany += $row->aggregate_enable;
            $totalDisableCompany += $row->aggregate_disable;
        }

        return [
            'total' => $total,
            'total_enable' => $totalEnableCompany,
            'total_disable' => $totalDisableCompany,
        ];
    });
}

Summary of optimizations

| Query | Before (Local) | After | Percentage | Before (Prod) | After | Percentage |
| --- | --- | --- | --- | --- | --- | --- |
| Without cache | 9.1 ms | 6.85 ms | 24.72 % | - | - | - |

Credit chart overview

Description

API to get the credit overview (Received Credits, Consumed Credits) for the last 12 months.

Optimizing Status

Contract

  • URL: /api/portal/dashboard/credit-overview
  • Method: GET
  • Permission: get-hiring-overview

Response

{
    "data": {
        "total_today_hired": "0",
        "total_hired_of_current_month": "0",
        "hiring_charts": [
            {
                "total_hired_staffs": "14",
                "month_number": 11,
                "month": "Nov",
                "year": 2023
            },
            ...
        ]
    },
    "message": "Retrieved successfully"
}

1. Get consumed credits last 12 months

a. The query

select
    COALESCE(ROUND(SUM(payments.admin_jod_credit), 2), 0) AS consumed_credits,
    last_12_months.month_number as month_number,
    last_12_months.month as month,
    last_12_months.year as year
from `payments`
right join (
SELECT '2023-11' AS year_and_month, 2023 AS year, 'Nov' AS month, 11 AS month_number
union
SELECT '2023-12' AS year_and_month, 2023 AS year, 'Dec' AS month, 12 AS month_number
union
SELECT '2024-01' AS year_and_month, 2024 AS year, 'Jan' AS month, 1 AS month_number
union
SELECT '2024-02' AS year_and_month, 2024 AS year, 'Feb' AS month, 2 AS month_number
union
SELECT '2024-03' AS year_and_month, 2024 AS year, 'Mar' AS month, 3 AS month_number
union
SELECT '2024-04' AS year_and_month, 2024 AS year, 'Apr' AS month, 4 AS month_number
union
SELECT '2024-05' AS year_and_month, 2024 AS year, 'May' AS month, 5 AS month_number
union
SELECT '2024-06' AS year_and_month, 2024 AS year, 'Jun' AS month, 6 AS month_number
union
SELECT '2024-07' AS year_and_month, 2024 AS year, 'Jul' AS month, 7 AS month_number
union
SELECT '2024-08' AS year_and_month, 2024 AS year, 'Aug' AS month, 8 AS month_number
union
SELECT '2024-09' AS year_and_month, 2024 AS year, 'Sep' AS month, 9 AS month_number
union
SELECT '2024-10' AS year_and_month, 2024 AS year, 'Oct' AS month, 10 AS month_number
) as last_12_months
on DATE_FORMAT(payments.created_at, "%Y-%m") = `last_12_months`.`year_and_month`
and exists (
select 1
from `jod_jobs`
where
`jod_jobs`.`id` = `payments`.`job_id`
and `jod_jobs`.`company_id` = ?
)
group by `last_12_months`.`year`, `last_12_months`.`month`, `last_12_months`.`month_number`
order by `last_12_months`.`year` asc, `last_12_months`.`month_number` asc;

b. Explain query

-> Sort: last_12_months.`year`, last_12_months.month_number  (actual time=14.6..14.6 rows=12 loops=1)
    -> Table scan on <temporary> (actual time=14.6..14.6 rows=12 loops=1)
        -> Aggregate using temporary table (actual time=14.6..14.6 rows=12 loops=1)
            -> Nested loop left join (cost=1825 rows=18144) (actual time=0.895..14.5 rows=96 loops=1)
                -> Table scan on last_12_months (cost=1.42..3.85 rows=12) (actual time=0.144..0.147 rows=12 loops=1)
                    -> Union materialize with deduplication (cost=1.2..1.2 rows=12) (actual time=0.0153..0.0153 rows=12 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=50e-6..103e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=33e-6..55e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..48e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=33e-6..49e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=41e-6..56e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=39e-6..54e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..48e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..46e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..47e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..47e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..48e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=33e-6..48e-6 rows=1 loops=1)
                -> Nested loop inner join (cost=542 rows=1512) (actual time=0.918..1.2 rows=7.92 loops=12)
                    -> Filter: (date_format(payments.created_at,'%Y-%m') = last_12_months.year_and_month) (cost=13.1 rows=1512) (actual time=0.902..1.13 rows=44.8 loops=12)
                        -> Table scan on payments (cost=13.1 rows=1512) (actual time=0.00946..0.524 rows=1506 loops=12)
                    -> Filter: (jod_jobs.company_id = 1) (cost=0.0208 rows=1) (actual time=0.00133..0.00134 rows=0.177 loops=537)
                        -> Single-row index lookup on jod_jobs using PRIMARY (id=payments.job_id) (cost=0.0208 rows=1) (actual time=0.0012..0.00122 rows=1 loops=537)

c. Query log time consumption

25.95 ms

d. Optimization

i. Add index

ALTER TABLE `payments` ADD INDEX (`created_at`);

ii. Init last 12 months using PHP Carbon\Carbon

$months = [];
$now = Carbon::now();
$months[] = [
    'year_and_month' => $now->format('Y-m'),
    'year' => $now->format('Y'),
    'month' => $now->format('M'),
    'month_number' => $now->format('n')
];
for ($i = 0; $i < 11; $i++) {
    $now->subMonth();
    array_unshift($months, [
        'year_and_month' => $now->format('Y-m'),
        'year' => $now->format('Y'),
        'month' => $now->format('M'),
        'month_number' => $now->format('n')
    ]);
}

iii. Optimize query

Get the sum of admin_jod_credit per month for the last 12 months. Months with no payments are absent from the result set; the Carbon-generated month list above is used to fill those gaps when building the chart response.

SELECT round(sum(admin_jod_credit),2) AS consumed_credits, DATE_FORMAT(payments.created_at, '%Y-%m') as pd
FROM payments
join jod_jobs on jod_jobs.id = payments.job_id
WHERE
jod_jobs.company_id = ?
and payments.created_at >= '2023-10-01 00:00:00'
group by pd
order by pd asc;

iv. Explain optimized query

-> Sort: pd  (actual time=2.89..2.89 rows=11 loops=1)
    -> Table scan on <temporary> (actual time=2.87..2.88 rows=11 loops=1)
        -> Aggregate using temporary table (actual time=2.87..2.87 rows=11 loops=1)
            -> Nested loop inner join (cost=176 rows=23) (actual time=1.28..2.77 rows=95 loops=1)
                -> Filter: ((payments.created_at between '2023-11-01 00:00:00' and '2024-10-31 23:59:59') and (payments.deleted_at is null) and (payments.job_id is not null)) (cost=157 rows=53.7) (actual time=1.26..2 rows=537 loops=1)
                    -> Table scan on payments (cost=157 rows=1512) (actual time=0.0236..0.62 rows=1506 loops=1)
                -> Filter: (jod_jobs.company_id = 1) (cost=0.251 rows=0.428) (actual time=0.00134..0.00135 rows=0.177 loops=537)
                    -> Single-row index lookup on jod_jobs using PRIMARY (id=payments.job_id) (cost=0.251 rows=1) (actual time=0.00121..0.00122 rows=1 loops=537)

v. Query log time consumption

5.47 ms

vi. Conclusion

  • Simplify the query by grouping by created_at and adding an index for the query condition.
  • Reducing DATE_FORMAT Usage
    When possible, store dates in a format that is easier to filter or join (for example, separate years and months in YEAR and MONTH columns) so that DATE_FORMAT usage can be reduced or avoided. This will speed up performance, as DATE_FORMAT on columns often slows down queries.
  • Even though an index is created, MySQL does not use the index and chooses to scan the payments table. This is likely because the MySQL optimizer considers it better to scan the table than to use the index.
  • We can force the use of the index with FORCE INDEX if the execution in production is still slower.
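If forcing the index turns out to be necessary in production, the hint goes right after the table name. A sketch, assuming MySQL auto-named the unnamed index above `created_at` after its first column (verify the actual name with SHOW INDEX FROM payments first):

```sql
SELECT ROUND(SUM(admin_jod_credit), 2) AS consumed_credits,
       DATE_FORMAT(payments.created_at, '%Y-%m') AS pd
FROM payments FORCE INDEX (created_at)
JOIN jod_jobs ON jod_jobs.id = payments.job_id
WHERE jod_jobs.company_id = ?
  AND payments.created_at >= '2023-10-01 00:00:00'
GROUP BY pd
ORDER BY pd ASC;
```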
| Before Execution Time | After Execution Time | Percentage |
| --- | --- | --- |
| 25.95 ms | 5.47 ms | 78.92% |

2. Get received credits last 12 months

a. The query

select
    COALESCE(ROUND(SUM(credits.assigned_credits), 2), 0) AS received_credits,
    last_12_months.month_number as month_number,
    last_12_months.month as month,
    last_12_months.year as year
from `credits`
right join (
SELECT '2023-11' AS year_and_month, 2023 AS year, 'Nov' AS month, 11 AS month_number
union
SELECT '2023-12' AS year_and_month, 2023 AS year, 'Dec' AS month, 12 AS month_number
union
SELECT '2024-01' AS year_and_month, 2024 AS year, 'Jan' AS month, 1 AS month_number
union
SELECT '2024-02' AS year_and_month, 2024 AS year, 'Feb' AS month, 2 AS month_number
union
SELECT '2024-03' AS year_and_month, 2024 AS year, 'Mar' AS month, 3 AS month_number
union
SELECT '2024-04' AS year_and_month, 2024 AS year, 'Apr' AS month, 4 AS month_number
union
SELECT '2024-05' AS year_and_month, 2024 AS year, 'May' AS month, 5 AS month_number
union
SELECT '2024-06' AS year_and_month, 2024 AS year, 'Jun' AS month, 6 AS month_number
union
SELECT '2024-07' AS year_and_month, 2024 AS year, 'Jul' AS month, 7 AS month_number
union
SELECT '2024-08' AS year_and_month, 2024 AS year, 'Aug' AS month, 8 AS month_number
union
SELECT '2024-09' AS year_and_month, 2024 AS year, 'Sep' AS month, 9 AS month_number
union
SELECT '2024-10' AS year_and_month, 2024 AS year, 'Oct' AS month, 10 AS month_number
) as last_12_months
on DATE_FORMAT(credits.created_at, "%Y-%m") = `last_12_months`.`year_and_month`
and `credits`.`company_id` = ?
and `credits`.`location_id` is null
group by `last_12_months`.`year`, `last_12_months`.`month`, `last_12_months`.`month_number`
order by `last_12_months`.`year` asc, `last_12_months`.`month_number` asc;

b. Explain query

-> Sort: last_12_months.`year`, last_12_months.month_number  (actual time=7.32..7.32 rows=12 loops=1)
    -> Table scan on <temporary> (actual time=7.31..7.31 rows=12 loops=1)
        -> Aggregate using temporary table (actual time=7.31..7.31 rows=12 loops=1)
            -> Nested loop left join (cost=57.1 rows=152) (actual time=0.568..7.26 rows=17 loops=1)
                -> Table scan on last_12_months (cost=1.42..3.85 rows=12) (actual time=0.017..0.0193 rows=12 loops=1)
                    -> Union materialize with deduplication (cost=1.2..1.2 rows=12) (actual time=0.0161..0.0161 rows=12 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=61e-6..112e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=44e-6..67e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=33e-6..49e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=31e-6..48e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..48e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=35e-6..51e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=31e-6..46e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..47e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=33e-6..49e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..49e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=33e-6..49e-6 rows=1 loops=1)
                        -> Rows fetched before execution (cost=0..0 rows=1) (actual time=31e-6..46e-6 rows=1 loops=1)
                -> Filter: ((credits.company_id = 1) and (date_format(credits.created_at,'%Y-%m') = last_12_months.year_and_month) and (credits.location_id is null)) (cost=3.28 rows=12.7) (actual time=0.587..0.602 rows=0.75 loops=12)
                    -> Index lookup on credits using credits_location_id_foreign (location_id=NULL) (cost=3.28 rows=12.7) (actual time=0.0125..0.561 rows=656 loops=12)

c. Query log time consumption

9.27 ms

d. Optimization

i. Add index

Already indexed; see step "i. Add index" in the previous optimization above.

ii. Init last 12 months using PHP Carbon\Carbon

$months = [];
$now = Carbon::now();
$months[] = [
    'year_and_month' => $now->format('Y-m'),
    'year' => $now->format('Y'),
    'month' => $now->format('M'),
    'month_number' => $now->format('n')
];
for ($i = 0; $i < 11; $i++) {
    $now->subMonth();
    array_unshift($months, [
        'year_and_month' => $now->format('Y-m'),
        'year' => $now->format('Y'),
        'month' => $now->format('M'),
        'month_number' => $now->format('n')
    ]);
}

iii. Optimize query

Get the sum of assigned_credits per month for the last 12 months.

SELECT round(sum(assigned_credits),2) AS received_credits, DATE_FORMAT(created_at, '%Y-%m') as pd
FROM credits
WHERE
company_id = ?
and created_at >= '2023-10-01 00:00:00'
and location_id is null
group by pd
order by pd asc;

iv. Explain optimized query

Estimated query execution time is 0.206 milliseconds

-> Sort: pd  (actual time=0.206..0.206 rows=4 loops=1)
    -> Table scan on <temporary> (actual time=0.197..0.198 rows=4 loops=1)
        -> Aggregate using temporary table (actual time=0.196..0.196 rows=4 loops=1)
            -> Filter: ((credits.created_at >= TIMESTAMP'2023-10-01 00:00:00') and (credits.location_id is null)) (cost=9.71 rows=29.6) (actual time=0.145..0.169 rows=9 loops=1)
                -> Index lookup on credits using credits_company_id_foreign (company_id=1) (cost=9.71 rows=115) (actual time=0.101..0.128 rows=115 loops=1)

v. Query log time consumption

2.46 ms

vi. Conclusion

  • Simplify the query by grouping by created_at and adding an index for the query condition.
  • Reducing DATE_FORMAT Usage
    When possible, store dates in a format that is easier to filter or join (for example, separate years and months in YEAR and MONTH columns) so that DATE_FORMAT usage can be reduced or avoided. This will speed up performance, as DATE_FORMAT on columns often slows down queries.
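In the explain output above, the created_at and location_id predicates are still applied as a filter after the credits_company_id_foreign index lookup. If this endpoint becomes hot on larger datasets, a composite index could let MySQL evaluate both predicates inside the index. This is an untested suggestion (index name and column order are assumptions), not something measured here:

```sql
-- Hypothetical composite index; verify against production query patterns first
ALTER TABLE `credits` ADD INDEX `credits_company_created_idx` (`company_id`, `created_at`);
```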
| Before Execution Time | After Execution Time | Percentage |
| --- | --- | --- |
| 9.27 ms | 2.46 ms | 73.46% |

Summary of Optimizations

| Query | Before (Local) | After | Percentage | Before (Prod) | After | Percentage |
| --- | --- | --- | --- | --- | --- | --- |
| Get consumed credits last 12 months | 25.95 ms | 5.47 ms | 78.92% | - | - | - |
| Get received credits last 12 months | 9.27 ms | 2.46 ms | 73.46% | - | - | - |

Educational Institutes

Description

Gets data for the home dashboard page and returns a summary of educational institutes by status. The current implementation runs multiple queries; it can be simplified into a single query.

Optimizing Status

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/education/summary
  • Method: GET
  • Permission: get-summary-education-by-status
  • Possible Role Type: SUPER_ADMIN, USER_ADMIN, INTERNAL

Response

{
    "data": {
        "total": <int>,          // total educational institutes in the database
        "total_enable": <int>,   // total enabled educational institutes
        "total_disable": <int>   // total disabled educational institutes
    },
    "message": "Retrieved successfully"
}

1. Get all educational institutes

a. The query

select count(*) as aggregate from `educational_institutes`;

b. Explain Query

-> Count rows in educational_institutes  (actual time=0.846..0.846 rows=1 loops=1)

c. Query log time consumption

3.83 ms

2. Get enabled educational institutes

a. The query

select count(*) as aggregate from `educational_institutes` where `status` = 1;

b. Explain Query

-> Aggregate: count(0)  (cost=69.3 rows=1) (actual time=0.407..0.407 rows=1 loops=1)
    -> Filter: (educational_institutes.`status` = 1) (cost=63.1 rows=61.6) (actual time=0.243..0.383 rows=616 loops=1)
        -> Table scan on educational_institutes (cost=63.1 rows=616) (actual time=0.0205..0.129 rows=616 loops=1)

c. Query log time consumption

0.33 ms

3. Get disabled educational institutes

a. The query

select count(*) as aggregate from `educational_institutes` where `status` = 0;

b. Explain Query

-> Aggregate: count(0)  (cost=69.3 rows=1) (actual time=0.129..0.129 rows=1 loops=1)
    -> Filter: (educational_institutes.`status` = 0) (cost=63.1 rows=61.6) (actual time=0.128..0.128 rows=0 loops=1)
        -> Table scan on educational_institutes (cost=63.1 rows=616) (actual time=0.0188..0.108 rows=616 loops=1)

c. Query log time consumption

0.2 ms

Optimizations

1. Add index

ALTER TABLE `educational_institutes` ADD INDEX (`status`);

2. Optimize query

select count(id) as aggregate, status
from `educational_institutes`
group by status;

3. Explain query

-> Group aggregate: count(educational_institutes.id)  (cost=125 rows=1) (actual time=0.171..0.171 rows=1 loops=1)
    -> Covering index scan on educational_institutes using status (cost=63.1 rows=616) (actual time=0.0221..0.137 rows=616 loops=1)

4. Query log time consumption

5.49 ms

5. Conclusion

  • Aggregate queries with GROUP BY are usually better in terms of efficiency, resource consumption, and scalability, especially for large tables. In this test case, however, the separate queries are more efficient because almost all rows have status = 1 and the table is small.
  • Since this table rarely changes, it is highly recommended to use a cache with a long lifespan.

Implementation into code.

Update App\Repositories\EducationalInstituteRepositoryEloquent::educationSummaryByStatus

Without cache

public function educationSummaryByStatus()
{
    $result = $this->select(DB::raw('count(*) as aggregate'), 'status')
        ->groupBy('status')
        ->get();

    $total = 0;
    $totalEnable = 0;
    $totalDisable = 0;
    foreach ($result as $row) {
        $total += $row->aggregate;
        if ($row->status == 1) {
            $totalEnable += $row->aggregate;
        } else {
            $totalDisable += $row->aggregate;
        }
    }

    return [
        'total' => $total,
        'total_enable' => $totalEnable,
        'total_disable' => $totalDisable,
    ];
}

With cache

public function educationSummaryByStatus()
{
    $repo = $this;
    return Cache::remember('education-summary-status', 86400, function () use ($repo) {
        $result = $repo->select(DB::raw('count(*) as aggregate'), 'status')
            ->groupBy('status')
            ->get();

        $total = 0;
        $totalEnable = 0;
        $totalDisable = 0;
        foreach ($result as $row) {
            $total += $row->aggregate;
            if ($row->status == 1) {
                $totalEnable += $row->aggregate;
            } else {
                $totalDisable += $row->aggregate;
            }
        }

        return [
            'total' => $total,
            'total_enable' => $totalEnable,
            'total_disable' => $totalDisable,
        ];
    });
}

Summary of optimizations

| Query | Before (Local) | After | Percentage | Before (Prod) | After | Percentage |
| --- | --- | --- | --- | --- | --- | --- |
| Without cache | 4.36 ms | 5.49 ms | -25.92 % | - | - | - |

Future Payment

Description

Get a summary of unprocessed job payments.

Optimizing Status

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/payment/futurePayment/summary
  • Method: GET
  • Permission: summary-job-payment-by-status
  • Possible Role Type: SUPER_ADMIN, USER_ADMIN, INTERNAL

Response

{
    "data": {
        "total": <int>
    },
    "message": "Retrieved successfully"
}

1. Get total unprocessed payment

a. The query

select count(payments.id) as total
from `payments`
where
`payments`.`payment_status` = 1
and `payments`.`deleted_at` is null
limit 1;

b. Explain query

explain analyze 
select count(payments.id) as total
from `payments`
where
`payments`.`payment_status` = 1
and `payments`.`deleted_at` is null
limit 1;

-- result
-> Limit: 1 row(s) (cost=159 rows=1) (actual time=0.663..0.663 rows=1 loops=1)
    -> Aggregate: count(payments.id) (cost=159 rows=1) (actual time=0.662..0.662 rows=1 loops=1)
        -> Filter: ((payments.payment_status = 1) and (payments.deleted_at is null)) (cost=157 rows=15.1) (actual time=0.0303..0.629 rows=840 loops=1)
            -> Table scan on payments (cost=157 rows=1512) (actual time=0.0272..0.529 rows=1506 loops=1)

c. Query log time consumption

4.51 ms

d. Optimization

i. Add index

ALTER TABLE `jod`.`payments` ADD INDEX (`payment_status`, `deleted_at`);

ii. Explain query

-> Limit: 1 row(s)  (cost=170 rows=1) (actual time=0.403..0.404 rows=1 loops=1)
    -> Aggregate: count(payments.id) (cost=170 rows=1) (actual time=0.402..0.402 rows=1 loops=1)
        -> Filter: (payments.deleted_at is null) (cost=85.8 rows=842) (actual time=0.0338..0.354 rows=842 loops=1)
            -> Covering index lookup on payments using payment_status (payment_status=1, deleted_at=NULL) (cost=85.8 rows=842) (actual time=0.0326..0.294 rows=842 loops=1)

iii. Query log time consumption

0.82 ms

iv. Conclusion

  • For COUNT() queries, the best way to improve performance is to ensure that the filter condition (WHERE) has a good index. With the right index, MySQL can count rows that match the condition faster, without the need for a full table scan.
| Before Execution Time | After Execution Time | Percentage |
| --- | --- | --- |
| 4.51 ms | 0.82 ms | 81.82% |

Summary of Optimizations

| Query | Before (Local) | After | Percentage | Before (Prod) | After | Percentage |
| --- | --- | --- | --- | --- | --- | --- |
| Get total unprocessed payment | 4.51 ms | 0.82 ms | 81.82% | - | - | - |

Get consumed credits by HQ

Description

Get consumed credits by HQ

Optimizing Status

Already optimized

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/payment/consumed-credits-by-hq
  • Method: GET
  • Permission: consumed-credits-by-hq
  • Possible Role Type: HQ, AREA, LOCATION

Response

{
    "data": {
        "consumed_credits_current_month": <double>,
        "consumed_credits_previous_month": <double>
    },
    "message": "Get consumed credits successfully"
}

1. Get sum consumed credit

a. The query

This endpoint runs the same query twice: once for the current month and once for the previous month.

select SUM(payments.admin_jod_credit) as consumed_credits 
from `payments`
inner join `locations` on `payments`.`location_id` = `locations`.`id`
where
`locations`.`company_id` = <current user company_id>
and year(`payments`.`created_at`) = <year of current/prev month>
and month(`payments`.`created_at`) = <month of current/prev month>
and `payments`.`deleted_at` is null
limit 1;

b. Explain Query

-> Limit: 1 row(s)  (cost=709 rows=1) (actual time=2.74..2.74 rows=1 loops=1)
    -> Aggregate: sum(payments.admin_jod_credit) (cost=709 rows=1) (actual time=2.74..2.74 rows=1 loops=1)
        -> Nested loop inner join (cost=696 rows=122) (actual time=2.74..2.74 rows=0 loops=1)
            -> Filter: ((year(payments.created_at) = 2024) and (month(payments.created_at) = 10) and (payments.location_id is not null)) (cost=169 rows=1506) (actual time=2.6..2.73 rows=1 loops=1)
                -> Index lookup on payments using deleted_at (deleted_at=NULL), with index condition: (payments.deleted_at is null) (cost=169 rows=1506) (actual time=0.0681..2.08 rows=1506 loops=1)
            -> Filter: (locations.company_id = 1) (cost=0.25 rows=0.0812) (actual time=0.0134..0.0134 rows=0 loops=1)
                -> Single-row index lookup on locations using PRIMARY (id=payments.location_id) (cost=0.25 rows=1) (actual time=0.0121..0.0121 rows=1 loops=1)

c. Query log time consumption

3.71 ms + 2.19 ms = 5.9 ms

d. Conclusion

No optimization needed; this query is already optimized using indexed lookups.
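That said, the YEAR() and MONTH() calls on payments.created_at prevent MySQL from doing a range scan on that column. If this endpoint ever slows down on larger data, the filter could be rewritten as a sargable range, in line with the DATE_FORMAT advice earlier. A sketch, where the date literals stand in for the computed month boundaries:

```sql
select SUM(payments.admin_jod_credit) as consumed_credits
from `payments`
inner join `locations` on `payments`.`location_id` = `locations`.`id`
where
    `locations`.`company_id` = ?
    and `payments`.`created_at` >= '2024-10-01 00:00:00'
    and `payments`.`created_at` < '2024-11-01 00:00:00'
    and `payments`.`deleted_at` is null
limit 1;
```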

Hiring Overview (Super HQ Internal)

Description

Get total_active_jobs, total_open_jobs, total_completed_jobs_of_current_month and hiring chart of a company

Optimizing Status

Contract

  • URL: /api/portal/dashboard/hiring-overview
  • Method: GET
  • Permission: get-hiring-overview

Response

{
"data": {
"total_today_hired": "0",
"total_hired_of_current_month": "0",
"hiring_charts": [
{
"total_hired_staffs": "14",
"month_number": 11,
"month": "Nov",
"year": 2023
},
...
]
},
"message": "Retrieved successfully"
}

1. Get overview hiring

a. The query

select
COALESCE(
SUM(
CASE
WHEN DATE_FORMAT(jod_jobs.hired_datetime, "%Y-%m-%d") = DATE_FORMAT(CURRENT_DATE(), "%Y-%m-%d")
THEN jod_jobs.no_users_selected
ELSE 0
END
), 0) AS total_today_hired,
COALESCE(
SUM(
CASE
WHEN DATE_FORMAT(jod_jobs.hired_datetime, "%Y-%m") = DATE_FORMAT(CURRENT_DATE(), "%Y-%m")
THEN jod_jobs.no_users_selected
ELSE 0
END
), 0) AS total_hired_of_current_month
from `jod_jobs` where `jod_jobs`.`company_id` = 1
limit 1;

b. Explain Query

-> Limit: 1 row(s)  (cost=1158 rows=1) (actual time=16.7..16.7 rows=1 loops=1)
-> Aggregate: sum((case when (date_format(jod_jobs.hired_datetime,'%Y-%m') = date_format(curdate(),'%Y-%m')) then jod_jobs.no_users_selected else 0 end)), sum((case when (date_format(jod_jobs.hired_datetime,'%Y-%m-%d') = date_format(curdate(),'%Y-%m-%d')) then jod_jobs.no_users_selected else 0 end)) (cost=1158 rows=1) (actual time=11..11 rows=1 loops=1)
-> Index lookup on jod_jobs using jod_jobs_company_id_foreign (company_id=1) (cost=712 rows=4462) (actual time=0.184..7.68 rows=4462 loops=1)

c. Query log time consumption

14.83 ms

d. Optimization

i. Add index

ALTER TABLE `jod`.`jod_jobs` ADD INDEX (`hired_datetime`, `deleted_at`);

ii. Optimize query

Get the sum of no_users_selected per day, where hired_datetime is between the first day of the month and now.

select sum(no_users_selected) as total, DATE_FORMAT(hired_datetime, '%Y-%m-%d') as pd 
from `jod_jobs`
where
`company_id` = 1
and `hired_datetime` between '2024-10-01 00:00:00' and '2024-10-31 23:59:59'
and `jod_jobs`.`deleted_at` is null
group by `pd`
order by `pd` asc;

iii. Explain optimized query

-> Sort: pd  (actual time=0.0796..0.0796 rows=0 loops=1)
-> Table scan on <temporary> (actual time=0.0754..0.0754 rows=0 loops=1)
-> Aggregate using temporary table (actual time=0.0744..0.0744 rows=0 loops=1)
-> Filter: ((jod_jobs.company_id = 1) and (jod_jobs.deleted_at is null)) (cost=10.2 rows=0.941) (actual time=0.0621..0.0621 rows=0 loops=1)
-> Index range scan on jod_jobs using hired_datetime over ('2024-10-01 00:00:00' <= hired_datetime <= '2024-10-31 23:59:59'), with index condition: (jod_jobs.hired_datetime between '2024-10-01 00:00:00' and '2024-10-31 23:59:59') (cost=10.2 rows=22) (actual time=0.0184..0.0592 rows=22 loops=1)

iv. Query log time consumption

0.73 ms
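
Because the optimized query returns one row per day instead of the two pre-computed sums, the response fields are derived in the application. A minimal sketch of that fold, assuming the grouped rows are fetched as arrays (the helper name and row shape are illustrative, not the actual controller code):

```php
<?php

// Hypothetical helper: fold the per-day rows returned by the optimized
// query into the two response fields.
// Assumed row shape: ['total' => int, 'pd' => 'Y-m-d' string].
function summarizeHiring(array $rows, string $today): array
{
    $totalTodayHired = 0;
    $totalHiredOfCurrentMonth = 0;

    foreach ($rows as $row) {
        // Every row already falls inside the current month, because the
        // query restricts hired_datetime to the month's date range.
        $totalHiredOfCurrentMonth += $row['total'];
        if ($row['pd'] === $today) {
            $totalTodayHired += $row['total'];
        }
    }

    return [
        'total_today_hired' => $totalTodayHired,
        'total_hired_of_current_month' => $totalHiredOfCurrentMonth,
    ];
}
```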

v. Conclusion

  • DATE_FORMAT on hired_datetime prevents the use of indexes on this column, as MySQL needs to evaluate the function for each row, which is computationally expensive. For larger datasets, this would result in slower performance since full scans might be performed on the date range, even with an index on hired_datetime.
  • Limited Scalability:
    As the number of rows grows (e.g., 100,000+), the performance will degrade because of the overhead of date formatting on every row. The query currently takes 16.7 ms for 4462 rows, but as the number of rows increases, the time will scale poorly.
  • MySQL can handle date range queries more efficiently if no formatting is applied to the hired_datetime column. This can allow the query to use indexes on hired_datetime, improving performance.
| Before Execution Time | After Execution Time | Percentage |
| --- | --- | --- |
| 14.83 ms | 0.73 ms | 95.08% |

2. Get hiring chart

a. The query

select 
COALESCE(SUM(jod_jobs.no_users_selected),0) AS total_hired_staffs,
last_12_months.month_number as month_number,
last_12_months.month as month,
last_12_months.year as year
from `jod_jobs`
right join (
SELECT '2023-11' AS year_and_month, 2023 AS year, 'Nov' AS month, 11 AS month_number
union
SELECT '2023-12' AS year_and_month, 2023 AS year, 'Dec' AS month, 12 AS month_number
union
SELECT '2024-01' AS year_and_month, 2024 AS year, 'Jan' AS month, 1 AS month_number
union
SELECT '2024-02' AS year_and_month, 2024 AS year, 'Feb' AS month, 2 AS month_number
union
SELECT '2024-03' AS year_and_month, 2024 AS year, 'Mar' AS month, 3 AS month_number
union
SELECT '2024-04' AS year_and_month, 2024 AS year, 'Apr' AS month, 4 AS month_number
union
SELECT '2024-05' AS year_and_month, 2024 AS year, 'May' AS month, 5 AS month_number
union
SELECT '2024-06' AS year_and_month, 2024 AS year, 'Jun' AS month, 6 AS month_number
union
SELECT '2024-07' AS year_and_month, 2024 AS year, 'Jul' AS month, 7 AS month_number
union
SELECT '2024-08' AS year_and_month, 2024 AS year, 'Aug' AS month, 8 AS month_number
union
SELECT '2024-09' AS year_and_month, 2024 AS year, 'Sep' AS month, 9 AS month_number
union
SELECT '2024-10' AS year_and_month, 2024 AS year, 'Oct' AS month, 10 AS month_number
) as last_12_months
on DATE_FORMAT(jod_jobs.hired_datetime, "%Y-%m") = `last_12_months`.`year_and_month`
and `jod_jobs`.`company_id` = 1
group by `last_12_months`.`year`, `last_12_months`.`month` , `last_12_months`.`month_number`
order by `last_12_months`.`year` asc, `last_12_months`.`month_number` asc;

b. Explain query

-> Sort: last_12_months.`year`, last_12_months.month_number  (actual time=57.8..57.8 rows=12 loops=1)
-> Table scan on <temporary> (actual time=57.8..57.8 rows=12 loops=1)
-> Aggregate using temporary table (actual time=57.8..57.8 rows=12 loops=1)
-> Nested loop left join (cost=8514 rows=53712) (actual time=6.84..57.5 rows=290 loops=1)
-> Table scan on last_12_months (cost=1.42..3.85 rows=12) (actual time=0.0244..0.0317 rows=12 loops=1)
-> Union materialize with deduplication (cost=1.2..1.2 rows=12) (actual time=0.023..0.023 rows=12 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=93e-6..174e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=70e-6..107e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=50e-6..76e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..75e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..75e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=50e-6..73e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=50e-6..75e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=48e-6..73e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..73e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..73e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..74e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=48e-6..73e-6 rows=1 loops=1)
-> Filter: (date_format(jod_jobs.hired_datetime,'%Y-%m') = last_12_months.year_and_month) (cost=299 rows=4476) (actual time=4.5..4.79 rows=24.1 loops=12)
-> Index lookup on jod_jobs using jod_jobs_company_id_foreign (company_id=1) (cost=299 rows=4476) (actual time=0.0923..4.44 rows=4476 loops=12)

c. Query log time consumption

60.67 ms

d. Optimization

i. Add index

Same index as added in section 1.d.i above.

ii. Init last 12 months using PHP Carbon\Carbon

$months = [];
$now = Carbon::now();

// Start with the current month...
$months[] = [
    'year_and_month' => $now->format('Y-m'),
    'year' => $now->format('Y'),
    'month' => $now->format('M'),
    'month_number' => $now->format('n')
];

// ...then prepend the 11 previous months, so the array ends up oldest first.
for ($i = 0; $i < 11; $i++) {
    $now->subMonth();
    array_unshift($months, [
        'year_and_month' => $now->format('Y-m'),
        'year' => $now->format('Y'),
        'month' => $now->format('M'),
        'month_number' => $now->format('n')
    ]);
}

iii. Optimize query

Get the sum of no_users_selected per month over the last 12 months.

select sum(no_users_selected) as total, DATE_FORMAT(hired_datetime, '%Y-%m') as pd 
from `jod_jobs`
where
`company_id` = 1
and `hired_datetime` between '2023-11-01 00:00:00' and '2024-10-31 23:59:59'
and `jod_jobs`.`deleted_at` is null
group by `pd`
order by `pd` asc;
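
The per-month rows can then be merged with the Carbon-built `$months` array in the application. A minimal sketch, assuming the rows are fetched as arrays (helper name and row shape are illustrative):

```php
<?php

// Hypothetical helper: merge the Carbon-built $months array with the
// per-month rows from the optimized query.
// Assumed row shape: ['total' => int, 'pd' => 'Y-m' string].
// Months without hires fall back to 0, reproducing the RIGHT JOIN
// behaviour of the original query.
function buildHiringChart(array $months, array $rows): array
{
    // Index totals by their 'Y-m' key for O(1) lookup per month.
    $totalsByMonth = array_column($rows, 'total', 'pd');

    return array_map(function (array $m) use ($totalsByMonth) {
        return [
            'total_hired_staffs' => $totalsByMonth[$m['year_and_month']] ?? 0,
            'month_number'       => $m['month_number'],
            'month'              => $m['month'],
            'year'               => $m['year'],
        ];
    }, $months);
}
```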

iv. Explain optimized query

Estimated query execution time is 2.37 milliseconds

-> Sort: pd  (actual time=2.37..2.37 rows=11 loops=1)
-> Table scan on <temporary> (actual time=2.36..2.36 rows=11 loops=1)
-> Aggregate using temporary table (actual time=2.36..2.36 rows=11 loops=1)
-> Filter: ((jod_jobs.company_id = 1) and (jod_jobs.deleted_at is null)) (cost=641 rows=60.9) (actual time=0.024..2.19 rows=289 loops=1)
-> Index range scan on jod_jobs using hired_datetime over ('2023-11-01 00:00:00' <= hired_datetime <= '2024-10-31 23:59:59'), with index condition: (jod_jobs.hired_datetime between '2023-11-01 00:00:00' and '2024-10-31 23:59:59') (cost=641 rows=1423) (actual time=0.0191..2.12 rows=1423 loops=1)

v. Query log time consumption

8.9 ms

vi. Conclusion

  • Lack of Optimization in Join.
    RIGHT JOIN is used, even though in this case it is more logical to use LEFT JOIN, because we want to retrieve data from the subquery (last_12_months) first. Using LEFT JOIN is usually more optimal, because MySQL processes tables from left to right.
  • Potential Performance Issues for Large Datasets.
    If the jod_jobs table is very large, using DATE_FORMAT on the hired_datetime column to perform a join can be a performance drag. This function is applied to every record, which prevents MySQL from using indexes efficiently. An alternative solution is to perform the join using a range query, such as BETWEEN or YEAR() and MONTH().
  • The results of the optimized query can be aggregated in the application.
| Before Execution Time | After Execution Time | Percentage |
| --- | --- | --- |
| 60.67 ms | 8.9 ms | 85.33% |

Summary of Optimizations

| Query | Before (Local) | After | Percentage | Before (Prod) | After | Percentage |
| --- | --- | --- | --- | --- | --- | --- |
| Get overview hiring | 14.83 ms | 0.73 ms | 95.08% | - | - | - |
| Get hiring chart | 60.67 ms | 8.9 ms | 85.33% | - | - | - |

HQ Manager

Description

Get data for the home dashboard page and return a summary of HQ users.
These queries can be simplified into a single query instead of multiple queries.

Optimizing Status

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/hquser/summary
  • Method: GET
  • Permission: summary-hq-user
  • Possible Role Type: SUPER_ADMIN, USER_ADMIN, INTERNAL

Response

{
"data": {
"total": <int>, // total hq user in database
"total_enable": <int>, // total enabled hq user
"total_disable": <int> // total disabled hq user
},
"message": "Retrieved successfully"
}

Queries

1. Get all HQ users

a. The query

select count(*) as aggregate from `users` where `user_type` = 'HQ';

b. Explain Query

explain analyze select count(*) as aggregate from `users` where `user_type` = 'HQ';

-- result
-> Aggregate: count(0) (cost=460 rows=1) (actual time=1.61..1.61 rows=1 loops=1)
-> Filter: (users.user_type = 'HQ') (cost=422 rows=382) (actual time=0.344..1.57 rows=819 loops=1)
-> Covering index scan on users using unique_email_user_type (cost=422 rows=3820) (actual time=0.336..1.27 rows=4006 loops=1)

c. Query log time consumption

6.9 ms

2. Get enabled HQ users

a. The query

select count(*) as aggregate from `users` where `user_type` = 'HQ' and `is_deleted` = 0 and `status` = 1;

b. Explain Query

explain select count(*) as aggregate from `users` where `user_type` = 'HQ' and `is_deleted` = 0 and `status` = 1;

-- result
-> Aggregate: count(0) (cost=423 rows=1) (actual time=6.68..6.68 rows=1 loops=1)
-> Filter: ((users.`status` = 1) and (users.is_deleted = 0) and (users.user_type = 'HQ')) (cost=422 rows=3.82) (actual time=0.658..6.63 rows=819 loops=1)
-> Table scan on users (cost=422 rows=3820) (actual time=0.652..6.2 rows=4006 loops=1)

c. Query log time consumption

13.01 ms

3. Get disabled HQ users

a. The query

select count(*) as aggregate from `users` where `user_type` = 'HQ' and `is_deleted` = 0 and `status` = 0;

b. Explain Query

explain analyze select count(*) as aggregate from `users` where `user_type` = 'HQ' and `is_deleted` = 0 and `status` = 0;

-- result
-> Aggregate: count(0) (cost=423 rows=1) (actual time=2.69..2.69 rows=1 loops=1)
-> Filter: ((users.`status` = 0) and (users.is_deleted = 0) and (users.user_type = 'HQ')) (cost=422 rows=3.82) (actual time=2.69..2.69 rows=0 loops=1)
-> Table scan on users (cost=422 rows=3820) (actual time=0.3..2.49 rows=4006 loops=1)

c. Query log time consumption

4.29 ms

Optimizations

1. Add index if necessary

Check production to confirm that user_type and status are not already indexed. This is the same index used for Applicants above.

ALTER TABLE `users` ADD INDEX (`user_type`, `is_deleted`, `status`);

2. Optimize query

select count(id) as aggregate, status
from `users`
where
`user_type` = 'HQ'
and `is_deleted` = 0
group by status;

3. Explain query

explain ANALYZE
select count(id) as aggregate, status
from `users`
where
`user_type` = 'HQ'
and `is_deleted` = 0
group by status;

-- result
-> Group aggregate: count(users.id) (cost=254 rows=61.8) (actual time=0.392..0.392 rows=1 loops=1)
-> Covering index lookup on users using user_type (user_type='HQ', is_deleted=0) (cost=172 rows=819) (actual time=0.0183..0.351 rows=819 loops=1)

4. Query log time consumption

8.92 ms

5. Conclusion

  • Aggregate queries with GROUP BY are usually better in terms of efficiency, resource consumption, and scalability, especially for large tables.
  • If there are a lot of operations performed on the user_type and is_deleted columns, this index will be very helpful in speeding up other queries that utilize similar filters.
  • The status field indicates whether a user is active or inactive, so to get all HQ users we only need to filter on is_deleted = 0.

Implementation in code

Update App\Http\Controllers\HqUserController::hqUserSummaryByStatus

Without Cache

public function hqUserSummaryByStatus(SummaryHqUserByStatusRequest $request)
{
    $result = $this->userRepository
        ->select(DB::raw('count(id) as aggregate'), 'status')
        ->where('user_type', 'HQ')
        ->where('is_deleted', 0) // match the optimized query above
        ->groupBy('status')
        ->get();

    // init result
    $total = 0;
    $totalEnableHqUser = 0;
    $totalDisableHqUser = 0;

    foreach ($result as $row) {
        $total += $row->aggregate;
        if ($row->status == 1) {
            $totalEnableHqUser += $row->aggregate;
        } else {
            $totalDisableHqUser += $row->aggregate;
        }
    }

    return response()->json([
        'data' => [
            'total' => $total,
            'total_enable' => $totalEnableHqUser,
            'total_disable' => $totalDisableHqUser,
        ],
        'message' => __('messages.portal.hqusers.success.hqUserSummaryByStatus')
    ], 200);
}

Summary of optimizations

| Query | Before (Local) | After | Percentage | Before (Prod) | After | Percentage |
| --- | --- | --- | --- | --- | --- | --- |
| Without cache | 24.2 ms | 8.92 ms | 63.14% | - | - | - |

Job Overview (Super HQ Internal & HQ Manager)

Description

Get completed jobs statistics

Optimizing Status

Contract

  • URL: /api/portal/dashboard/jobs-overview
  • Method: GET
  • Permission: get-completed-jobs-statistics

Response

{
"data": {
"total_active_jobs": "0",
"total_open_jobs": "0",
"total_completed_jobs_of_current_month": "0",
"total_cancelled_jobs": "5",
"job_charts": [
{
"status": 1,
"type": 1,
"data": [
{
"total_jobs": 0,
"month_number": 11,
"month": "Nov",
"year": 2023
},
...
]
},
...
]
},
"message": "Retrieved successfully"
}

1. Get overview total jobs

a. The query

select COALESCE(
SUM(
jod_jobs.status IN (2)
AND DATE_FORMAT(jod_jobs.job_start_date, "%Y-%m") = DATE_FORMAT(CURRENT_DATE(), '%Y-%m')
), 0
) AS total_active_jobs,
COALESCE(
SUM(
jod_jobs.status IN (1)
AND DATE_FORMAT(jod_jobs.job_start_date, "%Y-%m") = DATE_FORMAT(CURRENT_DATE(), '%Y-%m')
), 0
) AS total_open_jobs,
COALESCE(
SUM(
jod_jobs.status IN (3,16)
AND DATE_FORMAT(jod_jobs.job_start_date, "%Y-%m") = DATE_FORMAT(CURRENT_DATE(), '%Y-%m')
), 0
) AS total_completed_jobs_of_current_month,
COALESCE(
SUM(
jod_jobs.status IN (4,15,11,13,7,8)
AND DATE_FORMAT(jod_jobs.job_start_date, "%Y-%m") = DATE_FORMAT(CURRENT_DATE(), '%Y-%m')
), 0
) AS total_cancelled_jobs
from `jod_jobs`
where `jod_jobs`.`company_id` = 1
limit 1;

b. Explain Query

-> Limit: 1 row(s)  (cost=1151 rows=1) (actual time=10.8..10.8 rows=1 loops=1)
-> Aggregate: sum(((jod_jobs.`status` in (4,15,11,13,7,8)) and (date_format(jod_jobs.job_start_date,'%Y-%m') = date_format(curdate(),'%Y-%m')))), sum(((jod_jobs.`status` in (3,16)) and (date_format(jod_jobs.job_start_date,'%Y-%m') = date_format(curdate(),'%Y-%m')))), sum(((jod_jobs.`status` = 1) and (date_format(jod_jobs.job_start_date,'%Y-%m') = date_format(curdate(),'%Y-%m')))), sum(((jod_jobs.`status` = 2) and (date_format(jod_jobs.job_start_date,'%Y-%m') = date_format(curdate(),'%Y-%m')))) (cost=1151 rows=1) (actual time=10.4..10.4 rows=1 loops=1)
-> Index lookup on jod_jobs using jod_jobs_company_id_foreign (company_id=1) (cost=705 rows=4462) (actual time=0.105..6.11 rows=4462 loops=1)

c. Query log time consumption

9.7 ms

d. Optimization

i. Add index

ALTER TABLE `jod_jobs` ADD INDEX (`job_start_date`, `status`, `deleted_at`);

ii. Optimize query

select count(id) as total, status 
from `jod_jobs`
where
`company_id` = 1
and `job_start_date`
between '2024-10-01 00:00:00' and '2024-10-31 23:59:59'
and `status` in (1,2,3,16,4,15,11,13,7,8)
and `jod_jobs`.`deleted_at` is null
group by `status`;
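
The per-status counts are then bucketed into the four response fields in the application. A minimal sketch, assuming the grouped rows are fetched as arrays (the helper name and row shape are illustrative; the status groups are taken from the original query):

```php
<?php

// Hypothetical helper: bucket the per-status counts from the optimized
// query into the four response fields.
// Assumed row shape: ['total' => int, 'status' => int].
function summarizeJobs(array $rows): array
{
    $groups = [
        'total_active_jobs'                     => [2],
        'total_open_jobs'                       => [1],
        'total_completed_jobs_of_current_month' => [3, 16],
        'total_cancelled_jobs'                  => [4, 15, 11, 13, 7, 8],
    ];

    $summary = array_fill_keys(array_keys($groups), 0);
    foreach ($rows as $row) {
        foreach ($groups as $field => $statuses) {
            if (in_array($row['status'], $statuses, true)) {
                $summary[$field] += $row['total'];
            }
        }
    }

    return $summary;
}
```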

iii. Explain optimized query

-> Table scan on <temporary>  (actual time=0.0223..0.0223 rows=0 loops=1)
-> Aggregate using temporary table (actual time=0.0214..0.0214 rows=0 loops=1)
-> Filter: (jod_jobs.company_id = 1) (cost=2.21 rows=0.05) (actual time=0.0127..0.0127 rows=0 loops=1)
-> Index range scan on jod_jobs using idx_scrum_637_job_start_date over ('2024-10-01 00:00:00' <= job_start_date <= '2024-10-31 23:59:59' AND 1 <= status <= 16 AND deleted_at = NULL), with index condition: ((jod_jobs.job_start_date between '2024-10-01 00:00:00' and '2024-10-31 23:59:59') and (jod_jobs.`status` in (1,2,3,16,4,15,11,13,7,8)) and (jod_jobs.deleted_at is null)) (cost=2.21 rows=1) (actual time=0.0118..0.0118 rows=0 loops=1)

iv. Query log time consumption

2.88 ms

v. Conclusion

  1. Adding the index without changing the query does not show a significant improvement; it can even be slower than before adding the index.
  2. The query can be simplified by grouping totals by status, producing the same result as the previous queries, with data manipulation done in the application instead of the database.
  3. The optimized query performs better because it is faster and processes less data, making more efficient use of indexes (index range scan).
| Before Execution Time | After Execution Time | Percentage |
| --- | --- | --- |
| 9.7 ms | 2.88 ms | 70.31% |

2. Get job chart #1

Get job chart for open, active, completed, cancelled, rejected by all manager

a. The query

  • Company id using company_id from current user
  • Query loop with this statuses (5 queries):
    • JOB_JOD_STATUS_OPENING (1)
    • JOB_JOD_STATUS_ACTIVE (2)
    • JOB_JOD_STATUS_COMPLETED (3), JOB_JOD_STATUS_PARTIALLY_COMPLETED (16)
    • JOB_JOD_STATUS_CANCELED (4), JOB_JOD_STATUS_CLOSED_CANCELLED (15), JOB_JOD_STATUS_ALL_CANCELLED (11), JOB_JOD_STATUS_ACTIVE_WITH_CANCELLED (13), JOB_JOD_STATUS_EXPIRED_NO_APPLICANTS (7), JOB_JOD_STATUS_EXPIRED_NO_SELECTION (8)
    • JOB_JOD_STATUS_ALL_REJECTED_BY_MANAGER(10), JOB_JOD_STATUS_ACTIVE_WITH_REJECTED(12), JOB_JOD_STATUS_CLOSED_REJECTED(14)
select 
COUNT(jod_jobs.id) AS total_jobs,
last_12_months.month_number as month_number,
last_12_months.month as month,
last_12_months.year as year
from `jod_jobs`
right join (
SELECT '2023-11' AS year_and_month, 2023 AS year, 'Nov' AS month, 11 AS month_number
union
SELECT '2023-12' AS year_and_month, 2023 AS year, 'Dec' AS month, 12 AS month_number
union
SELECT '2024-01' AS year_and_month, 2024 AS year, 'Jan' AS month, 1 AS month_number
union
SELECT '2024-02' AS year_and_month, 2024 AS year, 'Feb' AS month, 2 AS month_number
union
SELECT '2024-03' AS year_and_month, 2024 AS year, 'Mar' AS month, 3 AS month_number
union
SELECT '2024-04' AS year_and_month, 2024 AS year, 'Apr' AS month, 4 AS month_number
union
SELECT '2024-05' AS year_and_month, 2024 AS year, 'May' AS month, 5 AS month_number
union
SELECT '2024-06' AS year_and_month, 2024 AS year, 'Jun' AS month, 6 AS month_number
union
SELECT '2024-07' AS year_and_month, 2024 AS year, 'Jul' AS month, 7 AS month_number
union
SELECT '2024-08' AS year_and_month, 2024 AS year, 'Aug' AS month, 8 AS month_number
union
SELECT '2024-09' AS year_and_month, 2024 AS year, 'Sep' AS month, 9 AS month_number
union
SELECT '2024-10' AS year_and_month, 2024 AS year, 'Oct' AS month, 10 AS month_number
) as last_12_months
on DATE_FORMAT(jod_jobs.job_start_date, "%Y-%m") = `last_12_months`.`year_and_month`
and `jod_jobs`.`company_id` = ?
and `jod_jobs`.`status` in (?)
group by `last_12_months`.`year`, `last_12_months`.`month`, `last_12_months`.`month_number`
order by `last_12_months`.`year` asc, `last_12_months`.`month_number` asc;

b. Explain Query

There are 5 queries; below is an example of one of them.

-> Sort: last_12_months.`year`, last_12_months.month_number  (actual time=1.25..1.25 rows=12 loops=1)
-> Table scan on <temporary> (actual time=1.23..1.24 rows=12 loops=1)
-> Aggregate using temporary table (actual time=1.23..1.23 rows=12 loops=1)
-> Left hash join (date_format(jod_jobs.job_start_date,'%Y-%m') = last_12_months.year_and_month) (cost=4786 rows=4224) (actual time=1.16..1.17 rows=54 loops=1)
-> Table scan on last_12_months (cost=1.42..3.85 rows=12) (actual time=0.0611..0.0636 rows=12 loops=1)
-> Union materialize with deduplication (cost=1.2..1.2 rows=12) (actual time=0.0592..0.0592 rows=12 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=92e-6..150e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=77e-6..117e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=61e-6..98e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=152e-6..188e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=63e-6..100e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=58e-6..95e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=47e-6..84e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=126e-6..163e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=123e-6..160e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=47e-6..84e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=124e-6..161e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=137e-6..174e-6 rows=1 loops=1)
-> Hash
-> Filter: ((jod_jobs.company_id = 1) and (jod_jobs.`status` in (10,12,14))) (cost=338 rows=352) (actual time=0.0593..1.01 rows=49 loops=1)
-> Index range scan on jod_jobs using jod_jobs_status_index over (status = 10) OR (status = 12) OR (status = 14) (cost=338 rows=352) (actual time=0.0551..0.979 rows=351 loops=1)

c. Query log time consumption

Total time 131.5 ms

  • JOB_JOD_STATUS_OPENING = 0.76 ms
  • JOB_JOD_STATUS_ACTIVE = 0.68 ms
  • JOB_JOD_STATUS_COMPLETED & JOB_JOD_STATUS_PARTIALLY_COMPLETED = 63.31 ms
  • JOB_JOD_STATUS_CANCELED, JOB_JOD_STATUS_CLOSED_CANCELLED, JOB_JOD_STATUS_ALL_CANCELLED, JOB_JOD_STATUS_ACTIVE_WITH_CANCELLED, JOB_JOD_STATUS_EXPIRED_NO_APPLICANTS, JOB_JOD_STATUS_EXPIRED_NO_SELECTION = 65.4 ms
  • JOB_JOD_STATUS_ALL_REJECTED_BY_MANAGER, JOB_JOD_STATUS_ACTIVE_WITH_REJECTED, JOB_JOD_STATUS_CLOSED_REJECTED = 1.35 ms

d. Optimization

i. Add index

Uses the same index as added in 1.d.i above.

ii. Optimize query

select COUNT(jod_jobs.id) AS total_jobs, status, DATE_FORMAT(job_start_date, '%Y-%m') as pd 
from `jod_jobs`
where
`company_id` = 1
and `job_start_date` between '2023-11-01 00:00:00' and '2024-10-31 23:59:59'
and `status` in (1,2,3,16,4,15,11,13,7,8,10,12,14)
and `jod_jobs`.`deleted_at` is null
group by `pd`, `status`
order by `pd` asc, `status` asc
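
The single grouped result can then be split into one chart series per status group in the application, replacing the 5 separate queries. A minimal sketch, assuming the rows and the last-12-months list are plain arrays (helper name and row shape are illustrative):

```php
<?php

// Hypothetical helper: split the single grouped result into one chart
// series per status group.
// Assumed row shape: ['total_jobs' => int, 'status' => int, 'pd' => 'Y-m'];
// $months is the Carbon-built last-12-months array described earlier.
function buildJobCharts(array $rows, array $months): array
{
    $statusGroups = [
        [1],                   // opening
        [2],                   // active
        [3, 16],               // completed & partially completed
        [4, 15, 11, 13, 7, 8], // cancelled / expired
        [10, 12, 14],          // rejected by manager
    ];

    $charts = [];
    foreach ($statusGroups as $statuses) {
        $data = [];
        foreach ($months as $m) {
            $total = 0;
            foreach ($rows as $row) {
                if ($row['pd'] === $m['year_and_month']
                    && in_array($row['status'], $statuses, true)) {
                    $total += $row['total_jobs'];
                }
            }
            $data[] = [
                'total_jobs'   => $total,
                'month_number' => $m['month_number'],
                'month'        => $m['month'],
                'year'         => $m['year'],
            ];
        }
        $charts[] = ['statuses' => $statuses, 'data' => $data];
    }

    return $charts;
}
```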

iii. Explain optimized query

-> Sort: pd, jod_jobs.`status`  (actual time=6.43..6.44 rows=72 loops=1)
-> Table scan on <temporary> (actual time=6.4..6.4 rows=72 loops=1)
-> Aggregate using temporary table (actual time=6.4..6.4 rows=72 loops=1)
-> Filter: ((jod_jobs.job_start_date between '2023-11-01 00:00:00' and '2024-10-31 23:59:59') and (jod_jobs.`status` in (1,2,3,16,4,15,11,13,7,8,10,12,14)) and (jod_jobs.deleted_at is null)) (cost=273 rows=117) (actual time=5.25..6.07 rows=589 loops=1)
-> Index lookup on jod_jobs using jod_jobs_company_id_foreign (company_id=1) (cost=273 rows=4476) (actual time=0.0751..4.82 rows=4476 loops=1)

iv. Query log time consumption

18.22 ms

v. Conclusion

  • Lack of Optimization in Join.
    RIGHT JOIN is used, even though in this case it is more logical to use LEFT JOIN, because we want to retrieve data from the subquery (last_12_months) first. Using LEFT JOIN is usually more optimal, because MySQL processes tables from left to right.
  • Potential Performance Issues for Large Datasets.
    If the jod_jobs table is very large, using DATE_FORMAT on the job_start_date column to perform a join can be a performance drag. This function is applied to every record, which prevents MySQL from using indexes efficiently. An alternative solution is to perform the join using a range query, such as BETWEEN or YEAR() and MONTH().
| Before Execution Time | After Execution Time | Percentage |
| --- | --- | --- |
| 131.5 ms | 18.22 ms | 86.14% |

3. Get job chart #2

Get job chart for no-show applicants

a. The query

  • Company id using company_id from current user
  • Query with status: SLOT_USER_STATUS_NO_SHOW (8)
select 
COUNT(jod_jobs.id) AS total_jobs,
last_12_months.month_number as month_number,
last_12_months.month as month,
last_12_months.year as year
from `jod_jobs`
left join `slots` on `jod_jobs`.`id` = `slots`.`jod_job_id`
left join `slot_user` on `slots`.`id` = `slot_user`.`slot_id`
right join (
SELECT '2023-11' AS year_and_month, 2023 AS year, 'Nov' AS month, 11 AS month_number
union
SELECT '2023-12' AS year_and_month, 2023 AS year, 'Dec' AS month, 12 AS month_number
union
SELECT '2024-01' AS year_and_month, 2024 AS year, 'Jan' AS month, 1 AS month_number
union
SELECT '2024-02' AS year_and_month, 2024 AS year, 'Feb' AS month, 2 AS month_number
union
SELECT '2024-03' AS year_and_month, 2024 AS year, 'Mar' AS month, 3 AS month_number
union
SELECT '2024-04' AS year_and_month, 2024 AS year, 'Apr' AS month, 4 AS month_number
union
SELECT '2024-05' AS year_and_month, 2024 AS year, 'May' AS month, 5 AS month_number
union
SELECT '2024-06' AS year_and_month, 2024 AS year, 'Jun' AS month, 6 AS month_number
union
SELECT '2024-07' AS year_and_month, 2024 AS year, 'Jul' AS month, 7 AS month_number
union
SELECT '2024-08' AS year_and_month, 2024 AS year, 'Aug' AS month, 8 AS month_number
union
SELECT '2024-09' AS year_and_month, 2024 AS year, 'Sep' AS month, 9 AS month_number
union
SELECT '2024-10' AS year_and_month, 2024 AS year, 'Oct' AS month, 10 AS month_number
) as last_12_months
on DATE_FORMAT(jod_jobs.job_start_date, "%Y-%m") = `last_12_months`.`year_and_month`
and `jod_jobs`.`company_id` = ?
and `slot_user`.`status` = ?
group by `last_12_months`.`year`, `last_12_months`.`month`, `last_12_months`.`month_number`
order by `last_12_months`.`year` asc, `last_12_months`.`month_number` asc;

b. Explain Query

-> Sort: last_12_months.`year`, last_12_months.month_number  (actual time=48.1..48.1 rows=12 loops=1)
-> Table scan on <temporary> (actual time=48..48 rows=12 loops=1)
-> Aggregate using temporary table (actual time=48..48 rows=12 loops=1)
-> Nested loop left join (cost=1837 rows=18204) (actual time=5.36..47.9 rows=113 loops=1)
-> Table scan on last_12_months (cost=1.42..3.85 rows=12) (actual time=0.266..0.274 rows=12 loops=1)
-> Union materialize with deduplication (cost=1.2..1.2 rows=12) (actual time=0.264..0.264 rows=12 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=101e-6..184e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=74e-6..99e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..72e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=50e-6..75e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..74e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..73e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=48e-6..72e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=47e-6..71e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..72e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=48e-6..73e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=46e-6..70e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..72e-6 rows=1 loops=1)
-> Nested loop inner join (cost=1480 rows=1517) (actual time=2.87..3.97 rows=9.42 loops=12)
-> Nested loop inner join (cost=949 rows=1517) (actual time=0.01..2.19 rows=1517 loops=12)
-> Covering index lookup on slot_user using slot_user_status_index (status=8) (cost=13.7 rows=1517) (actual time=0.00797..0.403 rows=1517 loops=12)
-> Filter: (slots.id = slot_user.slot_id) (cost=0.0431 rows=1) (actual time=996e-6..0.00105 rows=1 loops=18204)
-> Single-row index lookup on slots using PRIMARY (id=slot_user.slot_id) (cost=0.0431 rows=1) (actual time=893e-6..911e-6 rows=1 loops=18204)
-> Filter: ((jod_jobs.company_id = 1) and (date_format(jod_jobs.job_start_date,'%Y-%m') = last_12_months.year_and_month)) (cost=0.0208 rows=1) (actual time=0.0011..0.0011 rows=0.00621 loops=18204)
-> Single-row index lookup on jod_jobs using PRIMARY (id=slots.jod_job_id) (cost=0.0208 rows=1) (actual time=845e-6..862e-6 rows=1 loops=18204)

c. Query log time consumption

35.25 ms

d. Optimization

i. Add index

Using add index above i. Add index

ii. Optimize query

select COUNT(jod_jobs.id) AS total_jobs, DATE_FORMAT(job_start_date, '%Y-%m') as pd 
from `jod_jobs`
inner join `slot_user` on `jod_jobs`.`id` = `slot_user`.`job_id`
where
`company_id` = 1
and `job_start_date` between '2023-11-01 00:00:00' and '2024-10-31 23:59:59'
and `slot_user`.`status` = 8
and `jod_jobs`.`deleted_at` is null
group by `pd`
order by `pd` asc;
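
Since the optimized query only returns the months that actually have no-show records, the remaining months are zero-filled in the application. A minimal sketch (helper name and row shape are illustrative):

```php
<?php

// Hypothetical helper: zero-fill the months missing from the grouped
// no-show result. Assumed row shape: ['total_jobs' => int, 'pd' => 'Y-m'];
// $months is the Carbon-built last-12-months array described earlier.
function buildNoShowChart(array $months, array $rows): array
{
    // Index totals by their 'Y-m' key for direct lookup per month.
    $totals = array_column($rows, 'total_jobs', 'pd');

    return array_map(function (array $m) use ($totals) {
        return [
            'total_jobs'   => $totals[$m['year_and_month']] ?? 0,
            'month_number' => $m['month_number'],
            'month'        => $m['month'],
            'year'         => $m['year'],
        ];
    }, $months);
}
```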

iii. Explain optimized query

-> Sort: pd  (actual time=7.56..7.56 rows=12 loops=1)
-> Table scan on <temporary> (actual time=7.54..7.54 rows=12 loops=1)
-> Aggregate using temporary table (actual time=7.54..7.54 rows=12 loops=1)
-> Nested loop inner join (cost=368 rows=60.1) (actual time=5.23..7.45 rows=113 loops=1)
-> Filter: ((jod_jobs.job_start_date between '2023-11-01 00:00:00' and '2024-10-31 23:59:59') and (jod_jobs.deleted_at is null)) (cost=273 rows=117) (actual time=5.18..6.02 rows=608 loops=1)
-> Index lookup on jod_jobs using jod_jobs_company_id_foreign (company_id=1) (cost=273 rows=4476) (actual time=0.0799..4.69 rows=4476 loops=1)
-> Filter: (slot_user.`status` = 8) (cost=0.609 rows=0.516) (actual time=0.00218..0.00226 rows=0.186 loops=608)
-> Index lookup on slot_user using slot_user_job_id_foreign (job_id=jod_jobs.id) (cost=0.609 rows=2.06) (actual time=0.00195..0.00215 rows=0.776 loops=608)

iv. Query log time consumption

9.25 ms

v. Conclusion

  • By using a simpler query, the results obtained are the same as the current query.
  • Query optimization proved to be more effective, reducing the execution time in the query plan from roughly 48 ms to about 7.5 ms.
  • Joining directly on the job_id field in the slot_user table proved more efficient than joining through the slots table first.
| Before Execution Time | After Execution Time | Percentage |
| --- | --- | --- |
| 35.25 ms | 9.25 ms | 73.76% |

Summary of Optimizations

| Query | Before (Local) | After | Percentage | Before (Prod) | After | Percentage |
| --- | --- | --- | --- | --- | --- | --- |
| Get Overview Total Jobs | 9.7 ms | 2.88 ms | 70.31% | - | - | - |
| Get job chart #1 (5 queries) | 131.5 ms | 18.22 ms | 86.14% | - | - | - |
| Get job chart #2 | 35.25 ms | 9.25 ms | 73.76% | - | - | - |

Job Types

Description

Get data for the home dashboard page and return a summary of job types.
These queries can be simplified into a single query instead of multiple queries.

Optimizing Status

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/jobtype/summary
  • Method: GET
  • Permission: summary-job-type
  • Possible Role Type: SUPER_ADMIN, USER_ADMIN, INTERNAL

Response

{
  "data": {
    "total": <int>,         // total job types in the database
    "total_enable": <int>,  // total enabled job types
    "total_disable": <int>  // total disabled job types
  },
  "message": "Retrieved successfully"
}

1. Get all job types

a. The query

select count(*) as aggregate from `job_types`

b. Explain Query

-> Count rows in job_types  (actual time=7.08..7.08 rows=1 loops=1)

c. Query log time consumption

4.94 ms

2. Get enabled job types

a. The query

select count(*) as aggregate from `job_types` where `status` = 1

b. Explain Query

-> Aggregate: count(0)  (cost=88.1 rows=1) (actual time=0.301..0.301 rows=1 loops=1)
-> Filter: (job_types.`status` = 1) (cost=80.3 rows=78.3) (actual time=0.0209..0.265 rows=781 loops=1)
-> Table scan on job_types (cost=80.3 rows=783) (actual time=0.02..0.204 rows=783 loops=1)

c. Query log time consumption

0.49 ms

3. Get disabled job types

a. The query

select count(*) as aggregate from `job_types` where `status` = 0

b. Explain Query

-> Aggregate: count(0)  (cost=88.1 rows=1) (actual time=0.185..0.185 rows=1 loops=1)
-> Filter: (job_types.`status` = 0) (cost=80.3 rows=78.3) (actual time=0.178..0.183 rows=2 loops=1)
-> Table scan on job_types (cost=80.3 rows=783) (actual time=0.0233..0.155 rows=783 loops=1)

c. Query log time consumption

0.5 ms

Optimizations

1. Add index

ALTER TABLE `jod`.`job_types` ADD INDEX (`status`);

2. Optimize query

select count(id) as aggregate, status
from `job_types`
group by status;

3. Explain query

-> Group aggregate: count(job_types.id)  (cost=159 rows=2) (actual time=0.0231..0.176 rows=2 loops=1)
-> Covering index scan on job_types using status (cost=80.3 rows=783) (actual time=0.019..0.139 rows=783 loops=1)

4. Query log time consumption

4.99 ms

5. Conclusion

  • Aggregate queries with GROUP BY are usually better in terms of efficiency, resource consumption, and scalability, especially for large tables.
  • Since job types rarely change, it is worth considering a cache with a long expiration time.
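The replacement of three COUNT(*) round-trips by one GROUP BY can be illustrated with a small self-contained sketch (Python/sqlite3 stands in for MySQL; column names follow this document, row counts mimic the explain output above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE job_types (id INTEGER PRIMARY KEY, status INT)")
con.executemany("INSERT INTO job_types(status) VALUES (?)",
                [(1,)] * 781 + [(0,)] * 2)

# Before: three separate round-trips to the database.
total = con.execute("SELECT COUNT(*) FROM job_types").fetchone()[0]
enabled = con.execute("SELECT COUNT(*) FROM job_types WHERE status = 1").fetchone()[0]
disabled = con.execute("SELECT COUNT(*) FROM job_types WHERE status = 0").fetchone()[0]

# After: one grouped query, then split the rows in application code,
# exactly as the controller loop below does.
rows = dict(con.execute(
    "SELECT status, COUNT(id) FROM job_types GROUP BY status"))
assert (total, enabled, disabled) == (sum(rows.values()), rows.get(1, 0), rows.get(0, 0))
print(total, enabled, disabled)  # 783 781 2
```

The .get(status, 0) fallbacks matter: a status with no rows simply produces no group, so the application must default it to zero.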

Implementation in code

Without cache

Modify App\Http\Controllers\JobTypeController::jobtypeSummaryByStatus

public function jobtypeSummaryByStatus(SummaryJobTypeByStatusRequest $request)
{
    // get all in a single query
    $rows = JobType::select(DB::raw('count(id) as aggregate'), 'status')
        ->groupBy('status')
        ->get();

    // init response fields
    $total = 0;
    $totalEnable = 0;
    $totalDisable = 0;

    // render data
    foreach ($rows as $row) {
        $total += $row->aggregate;
        if ($row->status == 1) {
            $totalEnable += $row->aggregate;
        } else {
            $totalDisable += $row->aggregate;
        }
    }

    return response()->json([
        'data' => [
            'total' => $total,
            'total_enable' => $totalEnable,
            'total_disable' => $totalDisable,
        ],
        'message' => __('messages.portal.jobtypes.success.jobtypeSummaryByStatus')
    ], 200);
}

With cache

Modify App\Http\Controllers\JobTypeController::jobtypeSummaryByStatus

public function jobtypeSummaryByStatus(SummaryJobTypeByStatusRequest $request)
{
    $data = Cache::remember('job-types-summary-by-status', 86400, function () {
        // get all in a single query
        $rows = JobType::select(DB::raw('count(id) as aggregate'), 'status')
            ->groupBy('status')
            ->get();

        // init response fields
        $total = 0;
        $totalEnable = 0;
        $totalDisable = 0;

        // render data
        foreach ($rows as $row) {
            $total += $row->aggregate;
            if ($row->status == 1) {
                $totalEnable += $row->aggregate;
            } else {
                $totalDisable += $row->aggregate;
            }
        }

        return [
            'total' => $total,
            'total_enable' => $totalEnable,
            'total_disable' => $totalDisable,
        ];
    });

    return response()->json([
        'data' => $data,
        'message' => __('messages.portal.jobtypes.success.jobtypeSummaryByStatus')
    ], 200);
}

Summary of optimizations

Query            Before (Local)    After      Percentage    Before (Prod)    After    Percentage
Without cache    5.93 ms           4.99 ms    15.85%        -                -        -

Jod credit available with HQ

Description

Get the JOD credit available for the current user's HQ company.

Optimizing Status

Already optimized

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/credit/jodCreditAvailableHq
  • Method: GET
  • Permission: view-jod-credit-available-hq
  • Possible Role Type: HQ, AREA, LOCATION

Response

{
  "data": {
    "available_credits": <double>
  },
  "message": "Retrieved successfully"
}

1. Get sum of available credits

a. The query

The company_id used is the same company_id from the current user's row in the users table.

select * from `companies` where `companies`.`id` = ? and `companies`.`deleted_at` is null limit 1

b. Explain Query

-> Limit: 1 row(s)  (cost=0..0 rows=1) (actual time=912e-6..992e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=62e-6..62e-6 rows=1 loops=1)

c. Query log time consumption

0.864 ms

d. Conclusion

No optimization needed; this query is already optimized via a primary-key lookup.

Locations

Description

Get summary data for locations by status.
This set of queries can be simplified into a single query.

Optimizing Status

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/location/summary
  • Method: GET
  • Permission: summary-location-by-status
  • Possible Role Type: SUPER_ADMIN, USER_ADMIN, INTERNAL

Response

{
  "data": {
    "total": <int>,         // total locations in the database
    "total_enable": <int>,  // total enabled locations
    "total_disable": <int>  // total disabled locations
  },
  "message": "Retrieved successfully"
}

1. Get all locations

a. The query

select count(*) as aggregate from `locations` where `locations`.`deleted_at` is null

b. Explain Query

-> Aggregate: count(0)  (cost=473 rows=1) (actual time=1.26..1.26 rows=1 loops=1)
-> Filter: (locations.deleted_at is null) (cost=432 rows=408) (actual time=0.0276..1.12 rows=4070 loops=1)
-> Table scan on locations (cost=432 rows=4077) (actual time=0.0269..0.909 rows=4070 loops=1)

c. Query log time consumption

6.14 ms

2. Get enabled locations

a. The query

select count(*) as aggregate from `locations` where `status` = 1 and `locations`.`deleted_at` is null;

b. Explain Query

-> Aggregate: count(0)  (cost=436 rows=1) (actual time=1.98..1.98 rows=1 loops=1)
-> Filter: ((locations.`status` = 1) and (locations.deleted_at is null)) (cost=432 rows=40.8) (actual time=0.038..1.79 rows=4065 loops=1)
-> Table scan on locations (cost=432 rows=4077) (actual time=0.0363..1.39 rows=4070 loops=1)

c. Query log time consumption

1.38 ms

3. Get disabled locations

a. The query

select count(*) as aggregate from `locations` where `status` = 0 and `locations`.`deleted_at` is null;

b. Explain Query

-> Aggregate: count(0)  (cost=436 rows=1) (actual time=1.13..1.13 rows=1 loops=1)
-> Filter: ((locations.`status` = 0) and (locations.deleted_at is null)) (cost=432 rows=40.8) (actual time=0.03..1.13 rows=5 loops=1)
-> Table scan on locations (cost=432 rows=4077) (actual time=0.0257..0.957 rows=4070 loops=1)

c. Query log time consumption

1.05 ms

Optimizations

1. Add index

ALTER TABLE `locations` ADD INDEX (`status`, `deleted_at`);

2. Optimize query

select count(id) as aggregate, status
from `locations`
where deleted_at is null
group by status;

3. Explain query

-> Group aggregate: count(locations.id)  (cost=473 rows=2) (actual time=0.0374..1.8 rows=2 loops=1)
-> Filter: (locations.deleted_at is null) (cost=432 rows=408) (actual time=0.0308..1.51 rows=4070 loops=1)
-> Covering index scan on locations using status (cost=432 rows=4077) (actual time=0.0299..1.17 rows=4070 loops=1)

4. Query log time consumption

1.91 ms

5. Conclusion

  1. Aggregate queries with GROUP BY are usually better in terms of efficiency, resource consumption, and scalability, especially for large tables.
  2. It may be worth considering a cache with a long expiration time that is invalidated whenever a row is inserted or updated.
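Point 2 in miniature: serve the summary from a cache and invalidate it on every write, so even a very long TTL never serves stale data. A framework-free Python/sqlite3 sketch (in the Laravel code this would correspond to Cache::remember plus a Cache::forget call on write, e.g. from a model observer; where exactly the invalidation hook lives is an assumption):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE locations (id INTEGER PRIMARY KEY, status INT, deleted_at TEXT)")
_cache = {}

def location_summary():
    # compute on miss, serve from cache on hit
    if "summary" not in _cache:
        rows = dict(con.execute(
            "SELECT status, COUNT(id) FROM locations "
            "WHERE deleted_at IS NULL GROUP BY status"))
        _cache["summary"] = {"total": sum(rows.values()),
                             "total_enable": rows.get(1, 0),
                             "total_disable": rows.get(0, 0)}
    return _cache["summary"]

def add_location(status):
    con.execute("INSERT INTO locations(status) VALUES (?)", (status,))
    _cache.pop("summary", None)  # invalidate on write

add_location(1); add_location(1); add_location(0)
print(location_summary())  # {'total': 3, 'total_enable': 2, 'total_disable': 1}
add_location(0)
print(location_summary())  # {'total': 4, 'total_enable': 2, 'total_disable': 2}
```

Invalidate-on-write keeps the cache correct without choosing a TTL tuned to how often the data changes.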

Implementation in code

Update App\Http\Controllers\LocationController::locationSummaryByStatus

Without cache

public function locationSummaryByStatus(SummaryLocationByStatusRequest $request)
{
    $result = Location::select(DB::raw('count(*) as aggregate'), 'status')
        ->groupBy('status')
        ->get();

    $total = 0;
    $totalEnable = 0;
    $totalDisable = 0;
    foreach ($result as $row) {
        $total += $row->aggregate;
        if ($row->status == 1) {
            $totalEnable += $row->aggregate;
        } else {
            $totalDisable += $row->aggregate;
        }
    }

    return response()->json([
        'data' => [
            'total' => $total,
            'total_enable' => $totalEnable,
            'total_disable' => $totalDisable,
        ],
        'message' => __('messages.portal.locations.success.summary')
    ], 200);
}

With cache

public function locationSummaryByStatus(SummaryLocationByStatusRequest $request)
{
    $data = Cache::remember('location-summary-by-status', 86400, function () {
        $result = Location::select(DB::raw('count(*) as aggregate'), 'status')
            ->groupBy('status')
            ->get();

        $total = 0;
        $totalEnable = 0;
        $totalDisable = 0;
        foreach ($result as $row) {
            $total += $row->aggregate;
            if ($row->status == 1) {
                $totalEnable += $row->aggregate;
            } else {
                $totalDisable += $row->aggregate;
            }
        }

        return [
            'total' => $total,
            'total_enable' => $totalEnable,
            'total_disable' => $totalDisable,
        ];
    });

    return response()->json([
        'data' => $data,
        'message' => __('messages.portal.locations.success.summary')
    ], 200);
}

Summary of optimizations

Query            Before (Local)    After      Percentage    Before (Prod)    After    Percentage
Without cache    8.57 ms           1.91 ms    77.71%        -                -        -

Payment statistics

Description

Get processed and unprocessed credits for the last 12 months.
The current queries can be simplified and supported with an index.

Optimizing Status

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/dashboard/payment-statistics
  • Method: GET
  • Permission: get-payment-statistics
  • Possible Role Type: SUPER_ADMIN, USER_ADMIN, INTERNAL

Response

{
  "data": [
    {
      "processed_credits": <double>,
      "month_number": <int>,
      "month": "<month 3 chars>",
      "year": <int>,
      "unprocessed_credits": <double>
    },
    ...
  ],
  "message": "Retrieved successfully"
}

1. Get unprocessed credits

Payments table with payment_status = 1

a. The query

select 
COALESCE(
ROUND(
SUM(
CASE WHEN payments.payment_status = 1
THEN payments.admin_jod_credit
ELSE 0
END
),
2
),
0
) AS unprocessed_credits,
last_12_months.month_number as month_number,
last_12_months.month as month,
last_12_months.year as year
from `payments` right join (
SELECT '2023-11' AS year_and_month, 2023 AS year, 'Nov' AS month, 11 AS month_number
union
SELECT '2023-12' AS year_and_month, 2023 AS year, 'Dec' AS month, 12 AS month_number
union
SELECT '2024-01' AS year_and_month, 2024 AS year, 'Jan' AS month, 1 AS month_number
union
SELECT '2024-02' AS year_and_month, 2024 AS year, 'Feb' AS month, 2 AS month_number
union
SELECT '2024-03' AS year_and_month, 2024 AS year, 'Mar' AS month, 3 AS month_number
union
SELECT '2024-04' AS year_and_month, 2024 AS year, 'Apr' AS month, 4 AS month_number
union
SELECT '2024-05' AS year_and_month, 2024 AS year, 'May' AS month, 5 AS month_number
union
SELECT '2024-06' AS year_and_month, 2024 AS year, 'Jun' AS month, 6 AS month_number
union
SELECT '2024-07' AS year_and_month, 2024 AS year, 'Jul' AS month, 7 AS month_number
union
SELECT '2024-08' AS year_and_month, 2024 AS year, 'Aug' AS month, 8 AS month_number
union
SELECT '2024-09' AS year_and_month, 2024 AS year, 'Sep' AS month, 9 AS month_number
union
SELECT '2024-10' AS year_and_month, 2024 AS year, 'Oct' AS month, 10 AS month_number
) as last_12_months on DATE_FORMAT(payments.created_at, "%Y-%m") = `last_12_months`.`year_and_month`
group by `last_12_months`.`year`, `last_12_months`.`month`, `last_12_months`.`month_number`
order by `last_12_months`.`year` asc, `last_12_months`.`month_number` asc;

b. Explain Query

-> Sort: last_12_months.`year`, last_12_months.month_number  (actual time=1.62..1.62 rows=12 loops=1)
-> Table scan on <temporary> (actual time=1.61..1.61 rows=12 loops=1)
-> Aggregate using temporary table (actual time=1.61..1.61 rows=12 loops=1)
-> Left hash join (date_format(payments.created_at,'%Y-%m') = last_12_months.year_and_month) (cost=1834 rows=18144) (actual time=1.27..1.3 rows=537 loops=1)
-> Table scan on last_12_months (cost=1.42..3.85 rows=12) (actual time=0.0223..0.0238 rows=12 loops=1)
-> Union materialize with deduplication (cost=1.2..1.2 rows=12) (actual time=0.0214..0.0214 rows=12 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=117e-6..168e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=39e-6..62e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=41e-6..58e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..49e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..47e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=33e-6..49e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..49e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=31e-6..47e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=33e-6..49e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=36e-6..53e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=32e-6..48e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=39e-6..55e-6 rows=1 loops=1)
-> Hash
-> Table scan on payments (cost=13.1 rows=1512) (actual time=0.0258..0.474 rows=1506 loops=1)

c. Query log time consumption

2.7 ms

d. Optimization

i. Add index

ALTER TABLE `jod`.`payments` ADD INDEX ( `processed_date`, `created_at`, `payment_status`);

ii. Optimize query

Use the first day of the month 12 months ago as the query condition:

SELECT sum(admin_jod_credit) as summary, DATE_FORMAT(created_at, '%Y-%m') as pd
FROM `payments`
WHERE
created_at >= '2023-12-01 00:00:00'
and payment_status = 1
group by pd
order by pd asc;

iii. Explain query

-> Sort: pd  (actual time=1.6..1.6 rows=11 loops=1)
-> Table scan on <temporary> (actual time=1.58..1.58 rows=11 loops=1)
-> Aggregate using temporary table (actual time=1.58..1.58 rows=11 loops=1)
-> Index lookup on payments using payment_status (payment_status=1), with index condition: (payments.created_at >= TIMESTAMP'2023-12-01 00:00:00') (cost=48.8 rows=842) (actual time=0.177..1.21 rows=272 loops=1)


iv. Query log time consumption

1.76 ms

v. Conclusion

  • Simplify the query by grouping on processed_date or created_at and adding an index for the query condition.
  • Reduce DATE_FORMAT usage: when possible, store dates in a form that is easier to filter or join on (for example, separate YEAR and MONTH columns) so that DATE_FORMAT can be reduced or avoided. Applying DATE_FORMAT to a column often slows queries down.
Before Execution Time    After Execution Time    Percentage
2.7 ms                   1.76 ms                 34.81%
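The shape of the optimized query can be sketched independently of MySQL; in this Python/sqlite3 snippet, strftime stands in for DATE_FORMAT and all sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE payments (id INTEGER PRIMARY KEY,
    admin_jod_credit REAL, payment_status INT, created_at TEXT)""")
con.executemany(
    "INSERT INTO payments(admin_jod_credit, payment_status, created_at) VALUES (?,?,?)", [
        (10.5, 1, '2024-01-05 10:00:00'),
        (4.25, 1, '2024-01-20 10:00:00'),
        (7.0,  1, '2024-03-02 10:00:00'),
        (9.0,  2, '2024-01-09 10:00:00'),  # processed, excluded here
        (3.0,  1, '2023-11-30 10:00:00'),  # before the cutoff, excluded
    ])

# Cutoff date plus GROUP BY on the formatted month replaces the
# RIGHT JOIN against a 12-row derived month table.
rows = con.execute("""
    SELECT strftime('%Y-%m', created_at) AS pd, SUM(admin_jod_credit)
    FROM payments
    WHERE created_at >= '2023-12-01 00:00:00' AND payment_status = 1
    GROUP BY pd ORDER BY pd
""").fetchall()
print(rows)  # [('2024-01', 14.75), ('2024-03', 7.0)]
```

Note that months without payments (February here) are simply absent from the result; the application code is responsible for filling them with zeros.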

2. Get processed credits

Payments table with payment_status = 2

a. The query

select 
COALESCE(
ROUND(
SUM(
CASE WHEN payments.payment_status = 2
THEN payments.admin_jod_credit
ELSE 0
END
),
2
),
0
) AS processed_credits,
last_12_months.month_number as month_number,
last_12_months.month as month,
last_12_months.year as year
from `payments`
right join (
SELECT '2023-11' AS year_and_month, 2023 AS year, 'Nov' AS month, 11 AS month_number
union
SELECT '2023-12' AS year_and_month, 2023 AS year, 'Dec' AS month, 12 AS month_number
union
SELECT '2024-01' AS year_and_month, 2024 AS year, 'Jan' AS month, 1 AS month_number
union
SELECT '2024-02' AS year_and_month, 2024 AS year, 'Feb' AS month, 2 AS month_number
union
SELECT '2024-03' AS year_and_month, 2024 AS year, 'Mar' AS month, 3 AS month_number
union
SELECT '2024-04' AS year_and_month, 2024 AS year, 'Apr' AS month, 4 AS month_number
union
SELECT '2024-05' AS year_and_month, 2024 AS year, 'May' AS month, 5 AS month_number
union
SELECT '2024-06' AS year_and_month, 2024 AS year, 'Jun' AS month, 6 AS month_number
union
SELECT '2024-07' AS year_and_month, 2024 AS year, 'Jul' AS month, 7 AS month_number
union
SELECT '2024-08' AS year_and_month, 2024 AS year, 'Aug' AS month, 8 AS month_number
union
SELECT '2024-09' AS year_and_month, 2024 AS year, 'Sep' AS month, 9 AS month_number
union
SELECT '2024-10' AS year_and_month, 2024 AS year, 'Oct' AS month, 10 AS month_number
) as last_12_months on DATE_FORMAT(payments.processed_date, "%Y-%m") = `last_12_months`.`year_and_month`
group by `last_12_months`.`year`, `last_12_months`.`month`, `last_12_months`.`month_number`
order by `last_12_months`.`year` asc, `last_12_months`.`month_number` asc;

b. Explain Query

-> Sort: last_12_months.`year`, last_12_months.month_number  (actual time=1.89..1.89 rows=12 loops=1)
-> Table scan on <temporary> (actual time=1.87..1.88 rows=12 loops=1)
-> Aggregate using temporary table (actual time=1.87..1.87 rows=12 loops=1)
-> Left hash join (date_format(payments.processed_date,'%Y-%m') = last_12_months.year_and_month) (cost=1834 rows=18144) (actual time=1.57..1.61 rows=221 loops=1)
-> Table scan on last_12_months (cost=1.42..3.85 rows=12) (actual time=0.0325..0.035 rows=12 loops=1)
-> Union materialize with deduplication (cost=1.2..1.2 rows=12) (actual time=0.0312..0.0312 rows=12 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=90e-6..159e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=50e-6..68e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=49e-6..72e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=46e-6..68e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=38e-6..60e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=55e-6..81e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=44e-6..77e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=52e-6..70e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=48e-6..71e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=52e-6..71e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=40e-6..62e-6 rows=1 loops=1)
-> Rows fetched before execution (cost=0..0 rows=1) (actual time=47e-6..77e-6 rows=1 loops=1)
-> Hash
-> Table scan on payments (cost=13.1 rows=1512) (actual time=0.0661..0.952 rows=1506 loops=1)

c. Query log time consumption

4.74 ms

d. Optimization

i. Add index

Same as above:

ALTER TABLE `jod`.`payments` ADD INDEX ( `processed_date`, `created_at`, `payment_status`);

ii. Optimize query

Use the first day of the month 12 months ago as the query condition:

SELECT sum(admin_jod_credit) as summary, DATE_FORMAT(processed_date, '%Y-%m') as pd
FROM `payments`
WHERE
processed_date >= '2023-12-01 00:00:00'
and payment_status = 2
group by pd
order by pd asc;

iii. Explain query

-> Sort: pd  (actual time=0.765..0.766 rows=11 loops=1)
-> Table scan on <temporary> (actual time=0.747..0.749 rows=11 loops=1)
-> Aggregate using temporary table (actual time=0.746..0.746 rows=11 loops=1)
-> Index range scan on payments using payment_status over (payment_status = 2 AND '2023-12-01 00:00:00' <= processed_date), with index condition: ((payments.payment_status = 2) and (payments.processed_date >= TIMESTAMP'2023-12-01 00:00:00')) (cost=97 rows=215) (actual time=0.0443..0.567 rows=215 loops=1)

iv. Query log time consumption

4.37 ms

v. Conclusion

  • Simplify the query by grouping on processed_date or created_at and adding an index for the query condition.
  • Reduce DATE_FORMAT usage: when possible, store dates in a form that is easier to filter or join on (for example, separate YEAR and MONTH columns) so that DATE_FORMAT can be reduced or avoided. Applying DATE_FORMAT to a column often slows queries down.
Before Execution Time    After Execution Time    Percentage
4.74 ms                  4.37 ms                 7.8%

Implementation in code

Without cache

Add App\Traits\YearMonthDayTrait trait to App\Services\StatisticsService

class StatisticsService extends BaseService implements StatisticsServiceInterface
{
    use YearMonthDayTrait;
    // other code
}

Modify App\Services\StatisticsService::getPaymentStatistics

public function getPaymentStatistics()
{
    // get the last 12 months
    $months = $this->getLast12MonthsForService();

    // get processed & unprocessed payments in the last 12 months
    $processedPayments = $this->paymentRepo->getPaymentsChart(false, $months);
    $unprocessedPayments = $this->paymentRepo->getPaymentsChart(true, $months);

    // render the result
    $result = [];
    foreach ($months as $month) {
        // init values
        $processed = 0;
        $unprocessed = 0;

        // find values in the collections
        $handleProcessed = $processedPayments->where('pd', $month['year_and_month'])->first();
        if (!empty($handleProcessed)) {
            $processed = round($handleProcessed->summary, 2);
        }
        $handleUnprocessed = $unprocessedPayments->where('pd', $month['year_and_month'])->first();
        if (!empty($handleUnprocessed)) {
            $unprocessed = round($handleUnprocessed->summary, 2);
        }

        $result[] = [
            'processed_credits' => $processed,
            'month_number' => (int) $month['month_number'],
            'month' => $month['month'],
            'year' => (int) $month['year'],
            'unprocessed_credits' => $unprocessed,
        ];
    }

    return [
        'response_data' => [
            'data' => $result,
            'message' => __('messages.payments.success.payslipRequest'),
        ],
        'status' => 200
    ];
}

Modify App\Repositories\PaymentRepository::getPaymentsChart

public function getPaymentsChart($unprocessed = false, array $months);

Modify App\Repositories\PaymentRepositoryEloquent::getPaymentsChart

public function getPaymentsChart($unprocessed = false, array $months)
{
    // summary date column depends on the status
    $targetDt = $unprocessed ? 'created_at' : 'processed_date';

    return $this->select(DB::raw('sum(admin_jod_credit) as summary'), DB::raw("DATE_FORMAT($targetDt, '%Y-%m') as pd"))
        ->where($targetDt, '>=', $months[0]['year_and_month'] . '-01 00:00:00')
        ->where('payment_status', $unprocessed ? Constants::PAYMENT_STATUS_UNPROCESSED : Constants::PAYMENT_STATUS_PROCESSED)
        ->groupBy('pd')
        ->orderBy('pd')
        ->get();
}

With cache

Same as without cache, except modify App\Services\StatisticsService::getPaymentStatistics

public function getPaymentStatistics()
{
    $service = $this;
    $result = Cache::remember('payment-statistics', 86400, function () use ($service) {
        // get the last 12 months
        $months = $service->getLast12MonthsForService();

        // get processed & unprocessed payments in the last 12 months
        $processedPayments = $service->paymentRepo->getPaymentsChart(false, $months);
        $unprocessedPayments = $service->paymentRepo->getPaymentsChart(true, $months);

        // render the result
        $result = [];
        foreach ($months as $month) {
            // init values
            $processed = 0;
            $unprocessed = 0;

            // find values in the collections
            $handleProcessed = $processedPayments->where('pd', $month['year_and_month'])->first();
            if (!empty($handleProcessed)) {
                $processed = round($handleProcessed->summary, 2);
            }
            $handleUnprocessed = $unprocessedPayments->where('pd', $month['year_and_month'])->first();
            if (!empty($handleUnprocessed)) {
                $unprocessed = round($handleUnprocessed->summary, 2);
            }

            $result[] = [
                'processed_credits' => $processed,
                'month_number' => (int) $month['month_number'],
                'month' => $month['month'],
                'year' => (int) $month['year'],
                'unprocessed_credits' => $unprocessed,
            ];
        }

        return $result;
    });

    return [
        'response_data' => [
            'data' => $result,
            'message' => __('messages.payments.success.payslipRequest'),
        ],
        'status' => 200
    ];
}

Summary of optimizations

Since this endpoint aggregates historical data, it is best to cache the result with an expiration of at least one day.

Query                      Before (Local)    After      Percentage    Before (Prod)    After    Percentage
Get unprocessed credits    2.7 ms            1.76 ms    34.81%        -                -        -
Get processed credits      4.74 ms           4.37 ms    7.8%          -                -        -

Rating Overview

Description

Get the average rating of a company.

Optimizing Status

Already optimized

  • Development
  • QA2
  • Production

Contract

  • URL: /api/portal/dashboard/rating-overview
  • Method: GET
  • Permission: get-rating-overview
  • Possible Role Type: HQ, AREA, LOCATION

Response

{
  "data": {
    "average_rating": <float>
  },
  "message": "Retrieved successfully"
}

1. Get average rating for HQ role

a. The query

select
COALESCE(ROUND(AVG(job_user.applicant_rating_job), 2), 0) as average_rating
from
`job_user`
inner join `jod_jobs` on `jod_jobs`.`id` = `job_user`.`job_id`
where
`jod_jobs`.`company_id` = <current user company_id>
limit 1;

b. Explain Query

-> Limit: 1 row(s)  (cost=2785 rows=1) (actual time=11.3..11.3 rows=1 loops=1)
-> Aggregate: avg(job_user.applicant_rating_job) (cost=2785 rows=1) (actual time=11.3..11.3 rows=1 loops=1)
-> Nested loop inner join (cost=2544 rows=2410) (actual time=0.0591..11.1 rows=2684 loops=1)
-> Table scan on job_user (cost=572 rows=5634) (actual time=0.0421..1.7 rows=5632 loops=1)
-> Filter: ((jod_jobs.company_id = 1) and (jod_jobs.id = job_user.job_id)) (cost=0.25 rows=0.428) (actual time=0.00154..0.00157 rows=0.477 loops=5632)
-> Single-row index lookup on jod_jobs using PRIMARY (id=job_user.job_id) (cost=0.25 rows=1) (actual time=0.00139..0.00141 rows=1 loops=5632)

c. Query log time consumption

39.37 ms

d. Conclusion

  1. No optimization needed; this query is already optimized using indexed keys.
  2. Queries also exist for the AREA and LOCATION roles, but the permission verification is currently not valid for those roles.