

Asked to Not Use Transactions and to Use A Workaround to Simulate One


I've been developing T-SQL for several years and am always digging in further, continuing to learn all I can about every aspect of the language. I recently started working at a new company and have received what I think is an odd suggestion regarding transactions: don't ever use them; instead, use a workaround that simulates one. This comes from our DBA, who works in one database with a lot of transactions and, consequently, a lot of blocking. The database I primarily work in does not suffer from this issue, and I can see that transactions have been used in it in the past.



I understand that blocking is expected with transactions, as it's in their nature to cause it, and if you can get away without using one, by all means do so. But I have many occasions where every statement MUST execute successfully: if one fails, they all must fail to commit.



I’ve always kept the scope of my transactions as narrow as possible, always used them in conjunction with SET XACT_ABORT ON, and always inside a TRY/CATCH.



Example:



CREATE SCHEMA someschema;
GO

CREATE TABLE someschema.tableA
(
    id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
    ColA VARCHAR(10) NOT NULL
);
GO

CREATE TABLE someschema.tableB
(
    id INT NOT NULL IDENTITY(1, 1) PRIMARY KEY,
    ColB VARCHAR(10) NOT NULL
);
GO

CREATE PROCEDURE someschema.ProcedureName
    @ColA VARCHAR(10),
    @ColB VARCHAR(10)
AS
SET NOCOUNT, XACT_ABORT ON;
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO someschema.tableA (ColA)
        VALUES (@ColA);

        INSERT INTO someschema.tableB (ColB)
        VALUES (@ColB);

        --Implement error
        SELECT 1 / 0;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
        BEGIN
            ROLLBACK TRANSACTION;
        END;
        THROW;
        RETURN;
    END CATCH;
END;
GO



Here is what they suggested I do instead.



GO



CREATE PROCEDURE someschema.ProcedureNameNoTransaction
    @ColA VARCHAR(10),
    @ColB VARCHAR(10)
AS
SET NOCOUNT ON;
BEGIN
    BEGIN TRY
        DECLARE @tableAid INT;
        DECLARE @tableBid INT;

        INSERT INTO someschema.tableA (ColA)
        VALUES (@ColA);
        SET @tableAid = SCOPE_IDENTITY();

        INSERT INTO someschema.tableB (ColB)
        VALUES (@ColB);
        SET @tableBid = SCOPE_IDENTITY();

        --Implement error
        SELECT 1 / 0;
    END TRY
    BEGIN CATCH
        DELETE FROM someschema.tableA
        WHERE id = @tableAid;

        DELETE FROM someschema.tableB
        WHERE id = @tableBid;

        THROW;
        RETURN;
    END CATCH;
END;
GO


My question to the community is this: does this make sense as a viable workaround for transactions?



Based on what I know about transactions and what this solution proposes, my opinion is that no, it isn't viable, and it introduces many points of failure.



In the suggested workaround, I see four implicit transactions occurring: the two inserts in the TRY, and then two more transactions for the deletes in the CATCH. The deletes do “undo” the inserts, but nothing is ever actually rolled back; the inserts have already been committed on their own.
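To make the gap concrete, here is a hypothetical timeline (the sessions and values are made up for illustration): because each statement autocommits, a second session can observe the half-done work before the CATCH block "undoes" it.

```sql
-- Session 1 runs the workaround procedure and fails after the first insert:
EXEC someschema.ProcedureNameNoTransaction @ColA = 'a', @ColB = 'b';

-- Session 2, running concurrently, can see the orphaned row in tableA
-- during the window between the INSERT and the compensating DELETE,
-- because that INSERT was already committed in its own transaction:
SELECT id, ColA
FROM someschema.tableA;
```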



This is a very basic example to demonstrate the concept they are suggesting. Some of the actual stored procedures I have been doing this in become exhaustingly long and difficult to manage, because “rolling back” multiple result sets, versus the two parameter values in this example, gets quite complicated, as you can imagine. And since the “rollback” is now done manually, the opportunity to miss something becomes real.



Another issue I think exists is with timeouts or severed connections. Does the work still get rolled back then? This is my understanding of why SET XACT_ABORT ON should be used: so that in these cases, the transaction will roll back.
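My mental model is the following sketch (the exact behaviour on a client timeout is precisely what I'm unsure about):

```sql
SET XACT_ABORT ON;
BEGIN TRANSACTION;

    INSERT INTO someschema.tableA (ColA) VALUES ('a');

    -- If the connection is severed at this point, the server detects the
    -- broken session and rolls back the open transaction, so the INSERT
    -- above never becomes visible. And with XACT_ABORT ON, most run-time
    -- errors doom the whole transaction instead of just the one statement.

    INSERT INTO someschema.tableB (ColB) VALUES ('b');

COMMIT TRANSACTION;
```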



Thanks in advance for your feedback!











  • The workaround suggested to you makes it possible to violate (at least) the "A" of ACID. For example, if the SP is being executed by a remote client and the connection breaks, a partial "commit"/"rollback" may happen, because the server can terminate the session between two insertions/deletions (and abort SP execution before it reaches its end).

    – i-one
    5 hours ago







  • No, the CATCH block never gets executed in the case of a query timeout, because the client API cancelled the batch. Without a transaction, SET XACT_ABORT ON cannot roll back anything other than the current statement.

    – Dan Guzman
    2 hours ago


















sql-server t-sql transaction






asked 8 hours ago









Forrest

362 bronze badges




2 Answers


































You cannot not use transactions in SQL Server (or probably any other proper RDBMS). In the absence of explicit transaction boundaries (BEGIN TRANSACTION ... COMMIT), each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).
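A quick sketch of that autocommit mode, borrowing the question's tables:

```sql
-- No BEGIN TRANSACTION anywhere, yet every statement still runs inside
-- a transaction of its own, committed the moment the statement succeeds.
INSERT INTO someschema.tableA (ColA) VALUES ('x');  -- transaction #1, committed
INSERT INTO someschema.tableB (ColB) VALUES ('y');  -- transaction #2, committed
SELECT @@TRANCOUNT;                                 -- 0: nothing is left open
```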



The transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three of the four requisite properties of transaction processing, because it addresses only "soft" errors and cannot deal with "hard" errors such as network disconnects, power outages, disk failures, and so on.



  • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


  • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


  • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


  • Durability: success. Changes you make will be durable; the database server ensures that. This is the only thing your colleague's approach cannot screw up.


Locks are a widely used and empirically successful method of ensuring the ACIDity of transactions in all sorts of RDBMSs (this site being an example). I find it very unlikely that a random DBA can find a better solution to the concurrency problem than the hundreds, possibly thousands, of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.






If transactions are blocking each other, the above isn't a good fix for the problem.

Here's a way to help alleviate the blocking: https://www.sqlservercentral.com/articles/using-indexes-to-reduce-blocking-in-concurrent-transactions

There could also be inefficient code within the transactions. Are there parts that could be moved outside of the transaction? SELECT statements shouldn't be inside transactions; only the actual inserts, updates, and deletes should be in there.
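For example (a sketch reusing the question's tables; the preliminary duplicate check and its error number are purely illustrative): do any reads and validation first, then open the transaction only around the writes.

```sql
-- Read/validate outside the transaction, so no write locks are held yet...
IF EXISTS (SELECT 1 FROM someschema.tableA WHERE ColA = @ColA)
    THROW 50001, 'Duplicate ColA value.', 1;

-- ...then hold the transaction open only for the duration of the writes.
SET XACT_ABORT ON;
BEGIN TRANSACTION;

    INSERT INTO someschema.tableA (ColA) VALUES (@ColA);
    INSERT INTO someschema.tableB (ColB) VALUES (@ColB);

COMMIT TRANSACTION;
```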






    share|improve this answer



























      Your Answer








      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "182"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader:
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      ,
      onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );







      Forrest is a new contributor. Be nice, and check out our Code of Conduct.









      draft saved

      draft discarded
















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdba.stackexchange.com%2fquestions%2f248677%2fasked-to-not-use-transactions-and-to-use-a-workaround-to-simulate-one%23new-answer', 'question_page');

      );

      Post as a guest















      Required, but never shown

























      2 Answers
      2






      active

      oldest

      votes








      2 Answers
      2






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      3
















      You can not not use transactions in SQL Server (and probably any other proper RDBMS). In the absence of explicit transaction boundaries (begin transaction ... commit) each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).



      Transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three out of four requisite properties of transaction processing, because it addresses only "soft" errors and is not capable of dealing with "hard" errors, such as network disconnects, power outages, disk failures, and so on.



      • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


      • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


      • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


      • Durability: success. Changes you make will be durable, the database server will ensure that; this is the only thing your colleague's approach cannot screw up.


      Locks are a widely used and empirically successful method to ensure ACIDity of transactions in all sorts or RDBMSes (this site being an example). I find it very unlikely that a random DBA can find a better solution to the concurrency problem than hundreds, possibly thousands of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



      In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.






      share|improve this answer





























        3
















        You can not not use transactions in SQL Server (and probably any other proper RDBMS). In the absence of explicit transaction boundaries (begin transaction ... commit) each SQL statement starts a new transaction, which is implicitly committed (or rolled back) after the statement completes (or fails).



        Transaction simulation suggested by the person who presents him- or herself as your "DBA" fails to ensure three out of four requisite properties of transaction processing, because it addresses only "soft" errors and is not capable of dealing with "hard" errors, such as network disconnects, power outages, disk failures, and so on.



        • Atomicity: fail. If a "hard" error occurs somewhere in the middle of your pseudo-transaction, the change will be non-atomic.


        • Consistency: fail. It follows from the above that your data will be in an inconsistent state following a "hard" error.


        • Isolation: fail. It is possible that a concurrent pseudo-transaction changes some of the data modified by your pseudo-transaction before yours completes.


        • Durability: success. Changes you make will be durable, the database server will ensure that; this is the only thing your colleague's approach cannot screw up.


        Locks are a widely used and empirically successful method to ensure ACIDity of transactions in all sorts or RDBMSes (this site being an example). I find it very unlikely that a random DBA can find a better solution to the concurrency problem than hundreds, possibly thousands of computer scientists and engineers who have been building some interesting database systems over the last, what, 50? 60 years? (I realise this is somewhat fallacious as an "appeal to authority" argument, but I'll stick to it regardless.)



        In conclusion, ignore your "DBA"'s advice if you can, fight it if you have the spirit, and come back here with specific concurrency problems if they arise.






        share|improve this answer



























          3














          3










          3









          answered 3 hours ago by mustaccio


























              If transactions are blocking each other, the workaround above isn't a good fix for the problem.

              Here's a way to help alleviate the blocking: https://www.sqlservercentral.com/articles/using-indexes-to-reduce-blocking-in-concurrent-transactions

              There could also be inefficient code within the transactions. Are there parts that could be moved outside of the transaction? SELECT statements generally shouldn't be inside transactions; only the actual inserts/updates/deletes need to be there.
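The "move reads out of the transaction" advice can be sketched as follows (a hypothetical order-entry schema, invented for illustration):

```sql
-- Read outside the transaction so the lookup's shared locks are
-- released before any write locks are taken; keep the write window short.
DECLARE @Price money;

SELECT @Price = ListPrice
FROM Products
WHERE ProductId = @ProductId;

BEGIN TRANSACTION;

INSERT INTO OrderLines (OrderId, ProductId, Price)
VALUES (@OrderId, @ProductId, @Price);

COMMIT TRANSACTION;
```

One caveat worth noting: reading outside the transaction trades consistency for concurrency; the price could change between the SELECT and the INSERT, so this pattern only fits reads that may be slightly stale.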






                  answered 6 hours ago by user238855























