Re: fetch across commit

Jared Still

2004-11-30

Hi all,

Have any of you considered using a 'where current of' update
cursor to work around this?

I've used it in the past to avoid both ORA-01555 and the 'fetch across commit' problem.
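
For anyone who hasn't hit it, the troublesome shape is roughly the one
below; it's just the classic commit inside an open cursor loop, and the
table and column names here are invented:

    -- classic fetch-across-commit: each commit releases undo that the
    -- still-open driving cursor may need for its consistent read,
    -- which is what eventually surfaces as ORA-01555
    for rec in ( select id from big_table where processed = 'N' )
    loop
         update big_table
         set processed = 'Y'
         where id = rec.id;

         commit;   -- commit while the driving cursor is still open
    end loop;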

Here's a bit of code showing the workaround, stripped to the relevant parts.

It would of course be a good idea not to commit after every row the way
this code does.


    -- driving cursor: note the rowid in the select list; it is used
    -- below to open a short-lived FOR UPDATE cursor on just that row
    cursor csr_dblink
    is
    select
          instance, username, table_owner, table_name,
          precedence, db_link_instance, db_link_username,
          rowid
    from data_sources;

    -- single-row FOR UPDATE cursor; locking one row at a time keeps
    -- each lock short and gives WHERE CURRENT OF a row to point at
    cursor csr_dblink_upd ( rowid_in rowid )
    is
    select
          instance, username, table_owner, table_name,
          precedence, db_link_instance, db_link_username,
          remote_instance, remote_table_name, remote_table_owner
    from data_sources
    where rowid = rowid_in
    for update;

    csr_dblink_upd_rec csr_dblink_upd%rowtype;

    for dblinkrec in csr_dblink
    loop

          if csr_dblink_upd%isopen then
               close csr_dblink_upd;
          end if;

          open csr_dblink_upd(dblinkrec.rowid);

          fetch csr_dblink_upd into csr_dblink_upd_rec;

          if csr_dblink_upd%notfound then
               raise_application_error(-20100, 'could not find current record');
          end if;

          -- the *_inout values are presumably parameters of the procedure
          -- this fragment was stripped from (see the skeleton below)
          update data_sources
          set
               remote_instance = remote_instance_inout,
               remote_table_name = remote_table_name_inout,
               remote_table_owner = remote_table_owner_inout
          where current of csr_dblink_upd;

          if csr_dblink_upd%isopen then
               close csr_dblink_upd;
          end if;

          commit;

    end loop;
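
The *_inout names in the update presumably come from the enclosing
procedure that was stripped out; a wrapper along these lines is what the
fragment assumes, with the procedure name and parameter modes/types
below being only a guess, not the original:

    procedure sync_remote_refs (
         remote_instance_inout    in out data_sources.remote_instance%type,
         remote_table_name_inout  in out data_sources.remote_table_name%type,
         remote_table_owner_inout in out data_sources.remote_table_owner%type
    )
    is
         -- cursor csr_dblink, cursor csr_dblink_upd and
         -- csr_dblink_upd_rec declared here, exactly as above
    begin
         -- the 'for dblinkrec in csr_dblink loop ... end loop;'
         -- block from above goes here
    end sync_remote_refs;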

Jared


On Tue, 30 Nov 2004 09:44:38 -0500, Goulet, Dick <dgoulet@(protected)> wrote:
> Ganesh,
>
>      The answer is yes and no. According to the SQL standard, once a
> commit is issued all open cursors are invalid and need reopening. Oracle,
> being the NICE dbms that it is, allows us to do otherwise. The problem
> is that the current SCN of your session is no longer the same as that of
> your cursor, and you've released all interest in the rollback segment
> entries generated before the now-current SCN. Although it is much more
> common for the problem to be caused by a second session updating the
> cursor table, the insert table can also be affected. Delayed block
> cleanout is one potential culprit; the second can be an integrity
> constraint check, an index update, or some other activity. OTS at one
> time pointed me to a data dictionary update as the problem: if you
> create the insert table with very small extents, then getting that third
> or fourth extent can cause the problem. The answer in my case was to
> create the insert table with one very large initial extent.
>
>
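
The large-initial-extent fix Dick mentions at the end would look
something like the statement below; the table definition, extent size
and tablespace name are placeholders, not anything from the thread:

    -- pre-allocate one big extent up front so the load never has to
    -- grab a third or fourth extent (and touch the data dictionary)
    -- mid-stream
    create table insert_target (
         id       number,
         payload  varchar2(4000)
    )
    tablespace data_ts
    storage ( initial 500m );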

--
Jared Still
Certifiable Oracle DBA and Part Time Perl Evangelist
--
http://www.freelists.org/webpage/oracle-l