How Often to Post to LinkedIn

Author: Julian Winternheimer

Published: August 1, 2025

Overview

This analysis follows up on a previous analysis I did on posting frequency and follower growth on Instagram. Here we focus on LinkedIn, with the aim of finding a sweet spot for the number of posts to share in a given week.

What We Found

This analysis reveals an interesting relationship between posting frequency and post performance on LinkedIn. The data suggests that posting more frequently consistently leads to higher engagement rates and greater total reach. However, as posting frequency rises, reach per post tends to decline. We offer a few possible explanations later in the analysis, but this tension creates an interesting consideration for anyone deciding how often to post on LinkedIn.

I also want to point out that, even though reach per post tends to decline as posting frequency rises, both engagement rate and total reach increase as people post more frequently.

These are the key findings:

Posting 2-5 times per week is probably the sweet spot for most people. Accounts that posted just once per week consistently underperformed their own baseline across the engagement metrics we looked at, with the step from 1 post to 2-5 posts per week resulting in substantial increases in engagement rates and minimal decreases in reach per post.

Higher posting frequencies result in higher engagement rates, but lower reach per post. Accounts that post 6-10 times per week and 11+ times per week achieve progressively higher engagement rates per post, but they also experience significant declines in reach per post.

The algorithm may optimize for higher engagement rates, but there’s no way for us to know for sure. As posting frequency increases, LinkedIn’s algorithm could prioritize showing posts to audiences more likely to engage, resulting in higher engagement rates but lower reach. If this was the case, the algorithm could be designed to prevent an account’s followers from seeing too many of a single account’s posts if they aren’t inclined to engage with it.

Methodology

This analysis examines how posting frequency affects per-post performance on LinkedIn, specifically reach per post, the number of engagements per post, and the engagement rate per post.

We analyzed approximately 2 million posts shared in the past year by LinkedIn accounts connected to Buffer. We used Z-scores and fixed effects regression to control for account-level differences and compare each account’s performance to its own baseline over time.

Data Collection

The SQL query below returns LinkedIn posting and engagement data for qualified profiles that posted in at least 4 weeks in the past year and received engagement.

Code
sql <- "
with qualified_linkedin_profiles as (
  --linkedin profiles that posted in at least 4 weeks in the past year 
  -- and got engagement on their posts
  select 
    up.profile_id
    , count(distinct timestamp_trunc(up.sent_at, week)) as weeks_with_posts
    , sum(up.engagements) as total_engagements
    , count(distinct up.id) as total_posts
  from dbt_buffer.publish_updates as up
  where up.profile_service = 'linkedin'
    and up.sent_at >= timestamp_sub(current_timestamp, interval 365 day)
    and up.engagements > 5
  group by 1
  having count(distinct timestamp_trunc(up.sent_at, week)) >= 4
)

, posts_with_median as (
  select 
    up.profile_id
    , timestamp_trunc(up.sent_at, week) as week
    , up.id
    , up.reach
    , up.likes
    , up.comments
    , up.engagements
    , up.engagement_rate
    , up.clicks
    , up.shares
    , percentile_cont(up.engagement_rate, 0.5) over (partition by up.profile_id, timestamp_trunc(up.sent_at, week)) as median_engagement_rate
    , percentile_cont(up.reach, 0.5) over (partition by up.profile_id, timestamp_trunc(up.sent_at, week)) as median_reach
  from dbt_buffer.publish_updates as up
  inner join qualified_linkedin_profiles as qlp
    on up.profile_id = qlp.profile_id
  where up.profile_service = 'linkedin'
    and up.sent_at >= timestamp_sub(current_timestamp, interval 365 day)
    and up.engagements > 5
)

select 
  profile_id as channel_id
  , week
  , count(distinct id) as posts
  , sum(reach) as total_reach
  , sum(likes) as total_likes
  , sum(comments) as total_comments
  , sum(engagements) as total_engagements
  , avg(engagement_rate) as avg_engagement_rate
  , avg(reach) as avg_reach
  , max(median_engagement_rate) as median_engagement_rate
  , max(median_reach) as median_reach
  , sum(clicks) as total_clicks
  , sum(shares) as total_shares
from posts_with_median
group by 1, 2
"

# get data from BigQuery
posts <- bq_query(sql = sql)

We’ll group the number of posts sent in a given week into four separate buckets:

  • 1 Post
  • 2-5 Posts
  • 6-10 Posts
  • 11+ Posts
Code
# create post frequency bins
posts <- posts %>% 
  mutate(posting_frequency_bin = case_when(
    posts == 1 ~ '1 Post',
    posts %in% 2:5 ~ '2-5 Posts',
    posts %in% 6:10 ~ '6-10 Posts',
    posts > 10 ~ '11+ Posts'),
    posting_frequency_bin = factor(posting_frequency_bin,
                                   levels = c("1 Post",
                                              "2-5 Posts", 
                                              "6-10 Posts",
                                              "11+ Posts")))

# calculate engagements per post
engagement_per_post <- posts %>% 
  mutate(engagements_per_post = total_engagements / posts,
         posting_frequency_bin = relevel(posting_frequency_bin, ref = "1 Post"))

Per-Post Performance Analysis

Our primary focus is on per-post metrics, as these more accurately measure the efficiency of a posting strategy rather than the effect of sheer volume.

Code
# calculate summary statistics for per-post metrics (excluding No Posts weeks)
engagement_per_post %>% 
  group_by(posting_frequency_bin) %>% 
  summarise(
    n_observations = n(),
    avg_reach_per_post = mean(avg_reach, na.rm = TRUE),
    median_reach_per_post = median(avg_reach, na.rm = TRUE),
    avg_engagements_per_post = mean(engagements_per_post, na.rm = TRUE),
    median_engagements_per_post = median(engagements_per_post, na.rm = TRUE),
    avg_engagement_rate = mean(avg_engagement_rate, na.rm = TRUE),
    median_engagement_rate = median(median_engagement_rate, na.rm = TRUE)
  )
# A tibble: 4 × 8
  posting_frequency_bin n_observations avg_reach_per_post median_reach_per_post
  <fct>                          <int>              <dbl>                 <dbl>
1 1 Post                        883451               694.                  264 
2 2-5 Posts                     979878               869.                  367 
3 6-10 Posts                    114447              1346.                  529.
4 11+ Posts                      42836              2616.                  852.
# ℹ 4 more variables: avg_engagements_per_post <dbl>,
#   median_engagements_per_post <dbl>, avg_engagement_rate <dbl>,
#   median_engagement_rate <dbl>

The plot below shows that the average reach per post increases as the posting frequency increases. This isn’t surprising, but we should be careful to account for confounders. It’s likely the case that larger accounts that get more engagement just tend to post more frequently. We’ll account for this in a moment.

Code
# plot reach per post by posting frequency
engagement_per_post %>% 
  group_by(posting_frequency_bin) %>% 
  summarise(avg_reach_per_post = mean(avg_reach, na.rm = TRUE)) %>% 
  ggplot(aes(x = posting_frequency_bin, y = avg_reach_per_post)) +
  geom_col(show.legend = FALSE) +
  scale_y_continuous(labels = comma) +
  labs(x = "Weekly Posts Shared",
       y = "Average Reach Per Post",
       title = "Average Reach Per Post by Posting Frequency",
       subtitle = "LinkedIn Profiles - Excluding No Posts Weeks")

The plot below shows the average number of engagements per post by posting frequency. Again, no surprises here.

Code
# plot engagements per post by posting frequency
engagement_per_post %>% 
  group_by(posting_frequency_bin) %>% 
  summarise(avg_engagements_per_post = mean(engagements_per_post, na.rm = TRUE)) %>% 
  ggplot(aes(x = posting_frequency_bin, y = avg_engagements_per_post)) +
  geom_col(show.legend = FALSE) +
  scale_y_continuous(labels = comma) +
  labs(x = "Weekly Posts Shared",
       y = "Average Engagements Per Post",
       title = "Average Engagements Per Post by Posting Frequency",
       subtitle = "LinkedIn Profiles - Excluding No Posts Weeks")

The plot below shows the average engagement rate by posting frequency. Here we see something surprising: as posting frequency rises, the average engagement rate seems to fall. The previous chart showed that engagements per post rose with posting frequency, but engagement rates don’t appear to follow the same pattern.

Again, we need to control for account-level differences before coming to any conclusions.

Code
# Visualize engagement rate by posting frequency
engagement_per_post %>% 
  group_by(posting_frequency_bin) %>% 
  summarise(avg_engagement_rate = mean(avg_engagement_rate, na.rm = TRUE)) %>% 
  ggplot(aes(x = posting_frequency_bin, y = avg_engagement_rate)) +
  geom_col(show.legend = FALSE) +
  scale_y_continuous(labels = percent_format(scale = 1)) +
  labs(x = "Weekly Posts Shared",
       y = "Average Engagement Rate",
       title = "Average Engagement Rate by Posting Frequency", 
       subtitle = "LinkedIn Profiles - Excluding No Posts Weeks")

Statistical Challenges and Our Approach

These initial relationships show interesting patterns, but they may not tell us the full story. We need to control for accounts’ inherent differences, such as follower counts.

To illustrate the need for controlling these factors, imagine a high-engagement account that naturally gets strong per-post performance and tends to post frequently, versus a smaller account with modest per-post performance that posts less often. If we simply compare across accounts, we might incorrectly conclude that posting frequency drives per-post performance, when in reality it could be that high-performing accounts simply tend to post more.

To address this issue, we use statistical methods that compare each account against itself over time, basically asking the question “when this account posts more or less frequently, how does their per-post performance change?” This within-account comparison controls for inherent differences between accounts.

How We Account for Inherent Account Differences

We use two complementary approaches to ensure we’re measuring the true effect of posting frequency:

Z-Score Analysis: Instead of comparing different channels to each other, we compare each channel to its own typical per-post performance. The Z-score tells us how much better or worse a channel performed compared to its own average.

For example, if Channel A typically gets 500 reach per post and Channel B typically gets 100 reach per post, a 300 reach-per-post week would be below average for Channel A, represented by a negative Z-score, and above average for Channel B, represented by a positive Z-score.
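
To make the example concrete, here’s a minimal sketch of the calculation in R. The standard deviations are assumed purely for illustration, since only the typical reach values are given above:

```r
# a channel's Z-score for a week: how far that week's value sits
# from the channel's own average, in standard deviation units
z_score <- function(value, channel_mean, channel_sd) {
  (value - channel_mean) / channel_sd
}

# Channel A typically gets 500 reach per post (sd assumed: 150)
z_score(300, channel_mean = 500, channel_sd = 150)  # negative: below A's average

# Channel B typically gets 100 reach per post (sd assumed: 150)
z_score(300, channel_mean = 100, channel_sd = 150)  # positive: above B's average
```

The same 300-reach week produces opposite-signed Z-scores for the two channels, which is exactly why this metric lets us pool accounts of very different sizes.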

Fixed Effects Regression: This statistical technique controls for all time-invariant account characteristics. It attempts to answer the question “when the same channel varies its posting frequency across different weeks, how does its per-post performance change?”

This method compares each channel’s high-frequency weeks to their own low-frequency weeks, controlling for the inherent differences between accounts that remain constant over time.
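
Conceptually, the fixed effects estimator is equivalent to demeaning each channel’s data before running OLS. A toy sketch of that within-transformation, using made-up data:

```r
library(dplyr)

# toy data: two channels with very different reach baselines
toy <- tibble::tibble(
  channel_id = c("A", "A", "A", "B", "B", "B"),
  posts      = c(1, 3, 7, 1, 3, 7),
  avg_reach  = c(500, 480, 420, 100, 95, 80)
)

# subtract each channel's own mean so only within-channel variation
# remains; the level difference between A and B drops out entirely
toy_demeaned <- toy %>%
  group_by(channel_id) %>%
  mutate(reach_within = avg_reach - mean(avg_reach),
         posts_within = posts - mean(posts)) %>%
  ungroup()

# OLS on the demeaned data recovers the within-channel slope,
# the same estimate a channel fixed effect would give
lm(reach_within ~ posts_within, data = toy_demeaned)
```

This is why the method is immune to confounding from stable traits like follower count: anything constant within a channel is subtracted away before the slope is estimated.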

Z-Score Analysis

This is the approach to calculate Z-scores for per-post metrics:

  1. Calculate the mean and standard deviation of per-post metrics across all weeks for each channel (excluding no-post weeks)

  2. Calculate a Z-score for each week: Z = (metric value - channel average) ÷ channel standard deviation

For each channel and week, the Z-score tells us how many standard deviations above or below average a channel performed:

  • Z = 0: Typical per-post performance for this channel
  • Z = +1: Strong positive week (better than ~84% of weeks)
  • Z = +2: Exceptional week (better than ~97% of weeks)
  • Z = -1: Poor week (worse than ~84% of weeks)

We’ll calculate Z-scores for reach per post, engagements per post, and engagement rate.

Code
# calculate channel-specific mean and standard deviation for per-post metrics
channel_stats_per_post <- engagement_per_post %>%
  group_by(channel_id) %>%
  summarise(
    mean_reach_per_post = mean(avg_reach, na.rm = TRUE),
    sd_reach_per_post = sd(avg_reach, na.rm = TRUE),
    mean_engagements_per_post = mean(engagements_per_post, na.rm = TRUE),
    sd_engagements_per_post = sd(engagements_per_post, na.rm = TRUE),
    mean_eng_rate = mean(avg_engagement_rate, na.rm = TRUE),
    sd_eng_rate = sd(avg_engagement_rate, na.rm = TRUE),
    n_weeks = n()
  ) %>%
  # only keep channels with sufficient data and variation
  filter(n_weeks >= 3, 
         sd_reach_per_post > 0, 
         sd_engagements_per_post > 0,
         sd_eng_rate > 0)

# merge back and calculate Z-scores for per-post metrics
engagement_per_post_with_z <- engagement_per_post %>%
  inner_join(channel_stats_per_post, by = "channel_id") %>%
  mutate(
    z_score_reach_per_post = (avg_reach - mean_reach_per_post) / sd_reach_per_post,
    z_score_engagements_per_post = (engagements_per_post - mean_engagements_per_post) / sd_engagements_per_post,
    z_score_eng_rate = (avg_engagement_rate - mean_eng_rate) / sd_eng_rate
  )

# calculate summary statistics for reach per post
engagement_per_post_with_z %>%
  group_by(posting_frequency_bin) %>%
  summarise(
    mean_engagements_z = mean(z_score_engagements_per_post),
    mean_eng_rate_z = mean(z_score_eng_rate),
    mean_z_score_reach_per_post = mean(z_score_reach_per_post, na.rm = TRUE))
# A tibble: 4 × 4
  posting_frequency_bin mean_engagements_z mean_eng_rate_z
  <fct>                              <dbl>           <dbl>
1 1 Post                           -0.0196         -0.0346
2 2-5 Posts                         0.0137          0.0202
3 6-10 Posts                        0.0260          0.0647
4 11+ Posts                         0.0234          0.0804
# ℹ 1 more variable: mean_z_score_reach_per_post <dbl>
Code
# plot z-scores for engagement rate
engagement_per_post_with_z %>%
  group_by(posting_frequency_bin) %>%
  summarise(
    mean_engagements_z = mean(z_score_engagements_per_post),
    mean_eng_rate_z = mean(z_score_eng_rate),
    mean_z_score_reach_per_post = mean(z_score_reach_per_post, na.rm = TRUE)) %>% 
  ggplot(aes(x = posting_frequency_bin, y = mean_eng_rate_z)) +
  geom_col(show.legend = F) +
  labs(x = "Posting Frequency", y = NULL,
       title = "Average Engagement Rate Z-Score by Posting Frequency",
       subtitle = "LinkedIn Profiles -- Excluding Weeks with No Posts")

The Z-score analysis gives us the first evidence of the complex relationship between posting frequency and different performance metrics. When we compare each account to its own baseline performance, we see divergent patterns for engagement versus reach metrics.

For engagement metrics, accounts that post just once per week consistently perform worse than their own baseline, showing negative Z-scores for both engagements per post and engagement rate. This pattern reverses as posting frequency increases: accounts posting 2-5 times per week show slight improvements over their baseline, and those posting 6-10 times per week perform even better. The strongest engagement effect comes at 11+ posts per week, where accounts achieve their highest engagement rates relative to their own typical performance.

However, reach per post shows the opposite trend. Accounts posting just once per week actually perform slightly above their own baseline for reach per post, while higher posting frequencies lead to progressively less reach per post compared to their baselines.

Next we’ll employ fixed effects regression models as another method to control for account-level differences.

Fixed Effects Regression Models for Per-Post Metrics

Fixed effects regression compares each channel against itself over time, controlling for all time-invariant characteristics of accounts. This allows us to isolate the effect of posting frequency on per-post performance.

Reference Category: All coefficients are interpreted relative to the “1 Post” per week category, which serves as our baseline. Positive coefficients indicate better per-post performance than the 1-post baseline; negative coefficients indicate worse performance.

Model Specification: We use robust standard errors clustered at the channel level to account for potential correlation within accounts over time.

Engagement Rate Per Post Model

The fixed effects models provide the strongest evidence for causal effects, comparing each account against itself over time. All results are relative to posting just 1 time per week, and the results confirm the trends we observed in the Z-score analysis earlier.

Code
# fit fixed effects model
fe_model <- feols(avg_engagement_rate ~ posting_frequency_bin | channel_id, 
                   data = engagement_per_post, 
                   cluster = "channel_id")
# summarise model
summary(fe_model)
OLS estimation, Dep. Var.: avg_engagement_rate
Observations: 2,020,612
Fixed-effects: channel_id: 94,485
Standard-errors: Clustered (channel_id) 
                                Estimate Std. Error  t value  Pr(>|t|)    
posting_frequency_bin2-5 Posts  0.232717   0.023557  9.87882 < 2.2e-16 ***
posting_frequency_bin6-10 Posts 0.758123   0.052327 14.48806 < 2.2e-16 ***
posting_frequency_bin11+ Posts  1.404902   0.112994 12.43343 < 2.2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RMSE: 11.9     Adj. R2: 0.313663
             Within R2: 1.897e-4

The engagement rate model shows that accounts posting 2-5 times per week achieve 0.23 percentage points higher engagement rates per post compared to their single-post weeks. This effect grows progressively stronger, with accounts posting 6-10 times per week seeing 0.76 percentage points higher engagement rates, and those posting 11 or more times per week achieving 1.40 percentage points higher engagement rates per post. The main takeaway is that posting more frequently tends to improve engagement rates.
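
If you’d rather pull these numbers out programmatically than read them off the printed summary, a quick sketch (assuming the `fe_model` object fitted above with fixest):

```r
# point estimates for each posting-frequency bin, relative to "1 Post"
coef(fe_model)

# 95% confidence intervals based on the clustered standard errors
confint(fe_model)

# full table of estimates, clustered SEs, t values, and p-values
fixest::coeftable(fe_model)
```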

Reach Per Post Model

Code
# fit reach fixed effects model
reach_fe_model <- feols(avg_reach ~ posting_frequency_bin | channel_id, 
                   data = engagement_per_post, 
                   cluster = "channel_id")
# summarise model
summary(reach_fe_model)
OLS estimation, Dep. Var.: avg_reach
Observations: 2,020,612
Fixed-effects: channel_id: 94,485
Standard-errors: Clustered (channel_id) 
                                 Estimate Std. Error  t value   Pr(>|t|)    
posting_frequency_bin2-5 Posts   -41.6072    21.8702 -1.90246 5.7114e-02 .  
posting_frequency_bin6-10 Posts -183.9813    43.1881 -4.26000 2.0462e-05 ***
posting_frequency_bin11+ Posts  -650.2180   260.8158 -2.49302 1.2668e-02 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
RMSE: 11,133.9     Adj. R2: 0.071113
                 Within R2: 2.838e-5

The reach per post model tells a different story. Accounts posting 2-5 times per week experience a marginally significant decrease in reach per post (p ≈ 0.057) compared to weeks in which they post only once. This negative effect grows larger as posting frequency increases.

There are several potential theories for why we’re seeing this effect. LinkedIn’s algorithm may show each individual post to a smaller, more targeted subset of followers when accounts post very frequently, helping to avoid overwhelming audiences with too much content from the same account.

Alternatively, the platform may become more selective about which followers see each post as posting frequency increases, showing posts to people most likely to engage rather than broadcasting to an account’s entire follower base.

Of course, it’s always possible that there is an issue with the underlying data or the techniques we’re employing. However, because these are fixed effects models that control for stable account-level differences, we can be reasonably confident that these effects are real.

If you have a suggestion for how to improve this analysis please feel free to share it with me!